{
"paper_id": "D13-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:42:35.465395Z"
},
"title": "Modeling Scientific Impact with Topical Influence Regression",
"authors": [
{
"first": "James",
"middle": [],
"last": "Foulds",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "Irvine"
}
},
"email": "jfoulds@ics.uci.edu"
},
{
"first": "Padhraic",
"middle": [],
"last": "Smyth",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "Irvine"
}
},
"email": "smyth@ics.uci.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "When reviewing scientific literature, it would be useful to have automatic tools that identify the most influential scientific articles as well as how ideas propagate between articles. In this context, this paper introduces topical influence, a quantitative measure of the extent to which an article tends to spread its topics to the articles that cite it. Given the text of the articles and their citation graph, we show how to learn a probabilistic model to recover both the degree of topical influence of each article and the influence relationships between articles. Experimental results on corpora from two well-known computer science conferences are used to illustrate and validate the proposed approach.",
"pdf_parse": {
"paper_id": "D13-1012",
"_pdf_hash": "",
"abstract": [
{
"text": "When reviewing scientific literature, it would be useful to have automatic tools that identify the most influential scientific articles as well as how ideas propagate between articles. In this context, this paper introduces topical influence, a quantitative measure of the extent to which an article tends to spread its topics to the articles that cite it. Given the text of the articles and their citation graph, we show how to learn a probabilistic model to recover both the degree of topical influence of each article and the influence relationships between articles. Experimental results on corpora from two well-known computer science conferences are used to illustrate and validate the proposed approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Scientific articles are not created equal. Some articles generate entire disciplines or sub-disciplines of research, or revolutionize how we think about a problem, while others contribute relatively little. When we are first introduced to a new area of scientific study, it would be useful to automatically find the most important articles, and the relationships of influence between articles. Understanding the impact of scientific work is also crucial for hiring decisions, allocation of funding, university rankings and other tasks that involve the assessment of scientific merit. If scientific works stand on the shoulders of giants, we would like to be able to find the giants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The importance of a scientific work has previously been measured chiefly through metrics derived from citation counts, such as impact factors. However, citation counts are not the whole story. Many citations are made in passing, are relevant to only one section of an article, or make no impact on a work but are referenced out of \"politeness, policy or piety\" (Ziman, 1968) . In reality, scientific impact has many dimensions. Some articles are important because they describe scientific discoveries that alter our understanding of the world, while some develop essential tools and techniques which facilitate future research. Other articles are influential because they introduce the seeds of new ideas, which in turn inspire many other articles.",
"cite_spans": [
{
"start": 361,
"end": 374,
"text": "(Ziman, 1968)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we introduce topical influence, a quantitative metric for measuring the latter type of scientific influence, defined in the context of an unsupervised generative model for scientific corpora. The model posits that articles \"coerce\" the articles that cite them into having similar topical content to them. Thus, articles with higher topical influence have a larger effect on the topics of the articles that cite them. We model this influence mechanism via a regression on the parameters of the Dirichlet prior over topics in an LDA-style topic model. We show how the models can be used to recover meaningful influence scores, both for articles and for specific citations. By looking not just at the citation graph but also taking into account the content of the articles, topical influence can provide a better picture of scientific impact than simple citation counts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Bibliometrics, the quantitative study of scientific literature, has a long history. One example of a widelyused bibliometric measure of interest is the impact factor of a publication venue for a given year, defined to be the average number of times articles from that venue, published in the previous two years, were cited in that year. However, the quality of articles in a given publication venue can vary wildly, and it is difficult to compare impact factors between different disciplines of study. The number of citations an article receives is an indication of importance, but this is confounded by the unknown function of each citation. Measures of importance such as PageRank (Brin and Page, 1998) can be derived recursively from the citation graph. Such graphbased measures do not in general make use of the textual content of the articles, although it is possible to apply them to graphs where the edges between articles are determined based on the similarity of their content instead of the citation graph (Lin, 2008) .",
"cite_spans": [
{
"start": 683,
"end": 704,
"text": "(Brin and Page, 1998)",
"ref_id": "BIBREF2"
},
{
"start": 1016,
"end": 1027,
"text": "(Lin, 2008)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "A variety of methods have previously been proposed for analyzing text and citation links together, such as modeling connections between words and citations Cohn and Hofmann (2001) , classifying citation function (Teufel et al., 2006) , and jointly modeling citation links and document content (Chang and . However, these methods do not directly measure article importance or influence relationships between articles given their citations.",
"cite_spans": [
{
"start": 156,
"end": 179,
"text": "Cohn and Hofmann (2001)",
"ref_id": "BIBREF3"
},
{
"start": 212,
"end": 233,
"text": "(Teufel et al., 2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "More closely related to the present work, Dietz et al. (2007) proposed the citation influence model (CIM). Building on the latent Dirichlet allocation (LDA) framework, CIM assumes that each word is drawn by first selecting either (a) the distribution over topics of a cited article (with probability proportional to the influence weight of that article on the present article) or (b) a novel topic distribution, and drawing a topic from the selected distribution, then finally drawing the word from the chosen topic. 1 In their approach, every word is assigned an extra latent variable, namely the cited article whose topic distribution the topic was drawn from. For the model proposed in this paper, we do not need to introduce these additional latent variables, which leads to a simpler latent representation and fewer variables to sample during inference. Dietz et al. (2007) also assume that the citation graph is bipartite, consisting of one set of citing articles and one set of cited articles-in contrast, our proposed models can handle arbitrary citation graphs in the form of directed 1 A somewhat similar model was also proposed by He et al. (2009) acyclic graphs (DAGs). While both the CIM and our approach can identify the influence of specific citations between articles, our model can also infer how influential each article is overall, and provides a flexible modeling framework which can handle different assumptions about influence.",
"cite_spans": [
{
"start": 42,
"end": 61,
"text": "Dietz et al. (2007)",
"ref_id": "BIBREF4"
},
{
"start": 859,
"end": 878,
"text": "Dietz et al. (2007)",
"ref_id": "BIBREF4"
},
{
"start": 1142,
"end": 1158,
"text": "He et al. (2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Another related method is due to Shaparenko and Joachims (2009) , who propose a mixture modeling approach for the detection of novel text content. Nallapati et al. 2011introduced TopicFlow, a PLSA-based model for the flow of topics in a document network. In their model, citing articles \"vote\" on each cited article's topic distribution in retrospect, via a network flow model. Since this voting occurs in time-reversed order, it does not describe an influence mechanism and is not a generative model that can simulate or predict new documents.",
"cite_spans": [
{
"start": 33,
"end": 63,
"text": "Shaparenko and Joachims (2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Finally, the document influence model of Gerrish and Blei (2010) can be viewed as orthogonal to this work, in that it models the impact of documents on topics over time (specifically, how topics change over time) rather than how articles influence the specific articles that cite them.",
"cite_spans": [
{
"start": 41,
"end": 64,
"text": "Gerrish and Blei (2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Scientific research is seldom performed in a vacuum. New research builds on the research that came before it. Although there are many aspects by which the importance of a scientific article can be judged, in this work we are interested in the extent to which a given article has or will have subsequent articles that build upon it or are otherwise inspired by its ideas. We begin by defining topical influence, a quantitative measure for this type of influence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical Influence Regression",
"sec_num": "3"
},
{
"text": "It is not immediately obvious how one might quantify such a notion of \"idea-based\" influence. However, the mechanism used in the scientific community for giving credit to prior work is citation. The presence of a citation from article b to article a therefore indicates that article b may have been influenced by the ideas in article a, to some unknown extent. We hypothesize that the extent of this influence manifests itself in the language of b. Using latent Dirichlet allocation (LDA) topics as a concrete proxy for the vague notion of \"ideas\", we define the topical influence of a to be the extent to which article a coerces the documents which cite it to have similar topic distributions to it. Topical influence will be made precise in the context of a generative model for scientific corpora, conditioned on the citation graph, called topical influence regression (TIR).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical Influence",
"sec_num": "3.1"
},
{
"text": "The proposed model extends the LDA framework of Blei et al. (2003) . In LDA, each word w",
"cite_spans": [
{
"start": 48,
"end": 66,
"text": "Blei et al. (2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topical Influence",
"sec_num": "3.1"
},
{
"text": "(d) i of each document d is assigned to one of K latent top- ics, z (d) i . Each topic \u03a6 (k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical Influence",
"sec_num": "3.1"
},
{
"text": "is a discrete distribution over words. Document d has a distribution over topics \u03b8 (d) , which can be viewed as a \"location in topic space\" summarizing its thematic content. The \u03b8 (d) 's have a Dirichlet prior distribution with parameters \u03b1 = [\u03b1 1 , \u03b1 2 , . . . , \u03b1 K ] \u22ba . Although the \u03b1 k 's are often set to be equal, representing a relatively uninformative prior over the \u03b8's, a unique \u03b1 (d) for each document can also be used to encode prior information such as the effect of other variables on the topics of that document (Mimno and McCallum, 2008) . In our case, we want to model the influence that a document has on the topic distributions of the documents that cite it. A natural way to encode such influence, then, is to allow documents to affect the value of \u03b1 (d) for each document d that cites them. Accordingly, we model each article d as having a latent, non-negative \"topical influence\" value l (d) . Let n (d) be number of words in article d, n",
"cite_spans": [
{
"start": 83,
"end": 86,
"text": "(d)",
"ref_id": null
},
{
"start": 180,
"end": 183,
"text": "(d)",
"ref_id": null
},
{
"start": 392,
"end": 395,
"text": "(d)",
"ref_id": null
},
{
"start": 528,
"end": 554,
"text": "(Mimno and McCallum, 2008)",
"ref_id": "BIBREF7"
},
{
"start": 772,
"end": 775,
"text": "(d)",
"ref_id": null
},
{
"start": 911,
"end": 914,
"text": "(d)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topical Influence",
"sec_num": "3.1"
},
{
"text": "(d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical Influence",
"sec_num": "3.1"
},
{
"text": "k be the number of words assigned to topic k, and let C (d) be the set of articles that d cites. We model \u03b1 (d) as",
"cite_spans": [
{
"start": 108,
"end": 111,
"text": "(d)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topical Influence",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 (d) = c\u2208C (d) l (c)z(c) + \u03b1 ,",
"eq_num": "(1)"
}
],
"section": "Topical Influence",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "wherez (c) = 1 n (c) [n (c) 1 , . . . , n",
"eq_num": "(c)"
}
],
"section": "Topical Influence",
"sec_num": "3.1"
},
{
"text": "K ] \u22ba is the normalized histogram of topic counts for document c, and \u03b1 is a constant for smoothing. Since thez (c) 's sum to one, the topical influence l (c) of article c can be interpreted as the number of words of precision that it adds to the prior of the topic distributions of each document that cites it. As we increase l (c) , the articles that cite c become more likely to have similar topic proportions to it. Thus, l (c) encodes the degree to which article c influences the topics of each of the articles that cite it. From another perspective, marginalizing out \u03b8 (d) , we can view the topic counts (in the standard LDA",
"cite_spans": [
{
"start": 112,
"end": 115,
"text": "(c)",
"ref_id": null
},
{
"start": 155,
"end": 158,
"text": "(c)",
"ref_id": null
},
{
"start": 576,
"end": 579,
"text": "(d)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topical Influence",
"sec_num": "3.1"
},
{
"text": "z (d) i w (d) i n (d) \u03b8 (d) \u03b1 (d) l (d) Articles that a cites z (a) i w (a) i n (d) \u03b8 (a) \u03b1 (a) l (a) Article a z (d) i w (d) i n (d) \u03b8 (d) \u03b1 (d) l (d) Articles that cite a \u03b2 K \u03a6 (k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical Influence",
"sec_num": "3.1"
},
{
"text": "Articles that d cites k (possibly fractional) balls of each color k \u2208 {1, . . . , K} initially in the urn. For each word, a ball is drawn randomly from the urn and the topic assignment is determined according to its color k. The ball is replaced in the urn, along with a new ball of color k. In our model, for each article c cited by article d we place l (c) balls, with colors distributed according toz (c) , into article d's urn initially. Thus, article d's topic assignments are more likely to be similar to those of the more influential articles that it cites. The total number of balls that d added to other articles' urns,",
"cite_spans": [
{
"start": 404,
"end": 407,
"text": "(c)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topical Influence",
"sec_num": "3.1"
},
{
"text": "T (d) b:d\u2208C (b) l (d) = l (d) {b : d \u2208 C (b) } (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical Influence",
"sec_num": "3.1"
},
{
"text": "measures the total impact (in a topical sense) of the article. We refer to this as total topical influence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical Influence",
"sec_num": "3.1"
},
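To make Equations (1) and (2) concrete, here is a minimal sketch (hypothetical function and variable names, not the authors' code) that computes the prior \alpha^{(d)} and the total topical influence from topic-count histograms:

```python
import numpy as np

def alpha_prior(cited_ids, influence, topic_counts, alpha_smooth=0.1):
    """Equation (1): alpha^(d) = sum_{c in C^(d)} l^(c) * zbar^(c) + alpha,
    where zbar^(c) is the normalized topic histogram of cited article c."""
    K = len(next(iter(topic_counts.values())))
    a = np.full(K, alpha_smooth)
    for c in cited_ids:
        zbar = topic_counts[c] / topic_counts[c].sum()
        a += influence[c] * zbar
    return a

def total_topical_influence(d, influence, cites):
    """Equation (2): T^(d) = l^(d) * |{b : d in C^(b)}|, i.e. the
    influence value times the number of articles citing d."""
    n_citers = sum(1 for refs in cites.values() if d in refs)
    return influence[d] * n_citers
```

For example, an article with influence 2.0 that is cited by three articles contributes two "words of precision" to each citer's prior and has total topical influence 6.0.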
{
"text": "The full assumed generative process for articles in this model begins with a directed acyclic citation graph G = {V, E}. Intuitively, citation graphs are typically DAGs because articles can normally only cite articles that precede them in time. We assume that G is a DAG so that influence relationships are consistent with some temporal ordering of the articles, and so that the resulting model is a Bayesian network. Here, each vertex v i corresponds to an ar-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Topical Influence Regression",
"sec_num": "3.2"
},
{
"text": "ticle d i , edge e = (v 1 , v 2 ) \u2208 E IFF d 1 is cited by d 2 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Topical Influence Regression",
"sec_num": "3.2"
},
{
"text": "and vertices (articles) are numbered in a topological ordering with respect to G. Such an ordering exists because G is a DAG. We model each article d's word vector w (d) as being generated in topological sequence, similarly to LDA but with its prior over topic distribution being Dirichlet(\u03b1 (d) ), as given by Equation 1. Note that each \u03b1 (d) is a function of the topics of the documents that it cites, parameterized by their topical influence values. We therefore call this model topical influence regression (TIR). The TIR model provides us with topical influence scores for each article, but it does not tell us about topical influence relationships between specific pairs of cited and citing articles. To model such relationships, we can consider a hierarchical extension to TIR, with edge-wise topical influences l (c,d) for each edge",
"cite_spans": [
{
"start": 166,
"end": 169,
"text": "(d)",
"ref_id": null
},
{
"start": 292,
"end": 295,
"text": "(d)",
"ref_id": null
},
{
"start": 340,
"end": 343,
"text": "(d)",
"ref_id": null
},
{
"start": 821,
"end": 826,
"text": "(c,d)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Topical Influence Regression",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(c, d) of the citation graph, l (c,d) \u223c TruncGaussian(l (c) , \u03c3, l (c,d) \u2265 0). In this case, \u03b1 (d) = c\u2208C (d) l (c,d)z(c) + \u03b1 .",
"eq_num": "(3)"
}
],
"section": "Generative Model for Topical Influence Regression",
"sec_num": "3.2"
},
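The generative process in this section visits articles in a topological order of the citation DAG. As an illustrative sketch (not the authors' code), such an order can be computed with Kahn's algorithm; `cites`, mapping each article to the list of articles it cites, is an assumed data structure.

```python
from collections import deque

def topological_order(cites):
    """Kahn's algorithm: order articles so that every article appears
    after all of the articles it cites. cites[d] lists the articles
    cited by d, i.e. C^(d)."""
    indeg = {d: len(refs) for d, refs in cites.items()}  # unmet citations
    citers = {d: [] for d in cites}                      # reverse adjacency
    for d, refs in cites.items():
        for c in refs:
            citers[c].append(d)
    queue = deque(d for d, k in indeg.items() if k == 0)
    order = []
    while queue:
        c = queue.popleft()
        order.append(c)
        for d in citers[c]:          # releasing c may unblock its citers
            indeg[d] -= 1
            if indeg[d] == 0:
                queue.append(d)
    return order
```

If the input graph contains a cycle, the returned order is shorter than the number of articles, which gives a cheap acyclicity check before running inference.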
{
"text": "This hierarchical setup allows us to continue to infer article-level topical influences, and provides a mechanism for sharing statistical strength between influences associated with one cited article. We shall refer to the model with influences on just the nodes (articles) as TIR, and the hierarchical extension with influences on the edges as TIRE. The graphical model for TIR is given in Figure 1 , and the generative process is detailed in the following pseudocode:",
"cite_spans": [],
"ref_spans": [
{
"start": 391,
"end": 399,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generative Model for Topical Influence Regression",
"sec_num": "3.2"
},
{
"text": "\u2022 For each topic k \u2022 Sample the topic \u03a6 (k) \u223c Dirichlet(\u03b2) \u2022 For each document d, in topological order",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Topical Influence Regression",
"sec_num": "3.2"
},
{
"text": "\u2022 Sample an influence weight,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Topical Influence Regression",
"sec_num": "3.2"
},
{
"text": "l (d) \u223c Exponential(\u03bb) \u2022 If using the TIRE model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Topical Influence Regression",
"sec_num": "3.2"
},
{
"text": "\u2022 For each cited document c \u2208 C (d) \u2022 Draw edge influence weight,",
"cite_spans": [
{
"start": 32,
"end": 35,
"text": "(d)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Topical Influence Regression",
"sec_num": "3.2"
},
{
"text": "l (c,d) \u223c TruncGauss(l (c) , \u03c3, l (c,d) \u2265 0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Topical Influence Regression",
"sec_num": "3.2"
},
{
"text": "\u2022 Assign a prior over topics via",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Topical Influence Regression",
"sec_num": "3.2"
},
{
"text": "\u03b1 (d) = c\u2208C (d) l (c)z(c) + \u03b1 (TIR), or \u03b1 (d) = c\u2208C (d) l (c,d)z(c) + \u03b1 (TIRE)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Topical Influence Regression",
"sec_num": "3.2"
},
{
"text": "\u2022 Sample a distribution over topics,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Topical Influence Regression",
"sec_num": "3.2"
},
{
"text": "\u03b8 (d) \u223c Dirichlet(\u03b1 (d) ) \u2022 For each word i in document d \u2022 Sample a topic z (d) i \u223c Discrete(\u03b8 (d) ) \u2022 Sample a word w (d) i \u223c Discrete(\u03a6 (z (d) i ) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Topical Influence Regression",
"sec_num": "3.2"
},
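The pseudocode above, in its node-level TIR variant, can be sketched end to end as a small simulator. All names and corpus sizes here are illustrative assumptions, not the authors' implementation; documents must be keyed in topological order of the citation DAG.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_tir_corpus(cites, K=5, V=50, n_words=100, lam=1.0,
                        alpha=0.1, beta=0.1):
    """Sketch of the TIR generative process (node-level influences)."""
    Phi = rng.dirichlet(np.full(V, beta), size=K)   # K topics over V words
    l, zcounts, docs = {}, {}, {}
    for d in cites:                                  # topological order
        l[d] = rng.exponential(1.0 / lam)            # influence weight l^(d)
        a = np.full(K, alpha)
        for c in cites[d]:                           # Equation (1)
            a += l[c] * zcounts[c] / zcounts[c].sum()
        theta = rng.dirichlet(a)                     # topic distribution
        z = rng.choice(K, size=n_words, p=theta)     # topic assignments
        w = np.array([rng.choice(V, p=Phi[k]) for k in z])
        zcounts[d] = np.bincount(z, minlength=K).astype(float)
        docs[d] = w
    return docs, l

docs, l = generate_tir_corpus({0: [], 1: [0], 2: [0, 1]})
```

Because each document's Dirichlet prior depends only on the topic counts of already-generated (cited) documents, a single forward pass over the topological order suffices.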
{
"text": "The TIR model can be viewed as an adaption of the Dirichlet-multinomial regression (DMR) framework of Mimno and McCallum (2008) to model topical influence. DMR also endows each document with its own unique \u03b1 (d) , but with \u03b1",
"cite_spans": [
{
"start": 102,
"end": 127,
"text": "Mimno and McCallum (2008)",
"ref_id": "BIBREF7"
},
{
"start": 208,
"end": 211,
"text": "(d)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relationship to Dirichlet-Multinomial Regression",
"sec_num": "3.3"
},
{
"text": "(d) k = exp(x (d)\u22ba \u03bb k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationship to Dirichlet-Multinomial Regression",
"sec_num": "3.3"
},
{
"text": "being a function of the observed feature vector x (d) parameterized by regression coefficients \u03bb. The DMR model can also be applied to text corpora with citation information, by setting the feature vectors to be binary indicators of the presence of a citation to each article. TIR differs in that the functional form of the regression is parameterized in a way that directly models influence, and also differs in that the regression takes advantage of the content of the cited articles via their topic assignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationship to Dirichlet-Multinomial Regression",
"sec_num": "3.3"
},
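For comparison, the DMR-style prior described above can be sketched as follows; the names `x` (a binary citation-indicator feature vector) and `lam` (a feature-by-topic coefficient matrix) are hypothetical, not from the paper.

```python
import numpy as np

def dmr_alpha(x, lam):
    """DMR prior: alpha^(d)_k = exp(x^(d)^T lambda_k). Rows of lam index
    features (here, citation indicators); columns index topics."""
    return np.exp(x @ lam)
```

Note the contrast with TIR: this prior sees only which articles are cited (through `x`), not the topical content of the cited articles.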
{
"text": "Because an article's prior over topic distributions depends on the topic assignments of the articles that it cites, TIR induces a network of dependencies between the topic assignments of the documents. Specifically, if we collapse out \u0398, the dependencies between the z's of each document form a Bayesian network whose graph is the citation graph. In contrast, DMR treats the documents as conditionally independent given their citations, and does not exploit their content in the regression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationship to Dirichlet-Multinomial Regression",
"sec_num": "3.3"
},
{
"text": "To illustrate this, Figure 2 shows an example citation graph and the resulting Bayesian network. In the figure, an edge in (a) from c to d corresponds to a citation of c by d. Conditioned on the topics, the dependence relationships between z nodes in (b) follow the same structure as the citation graph.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 28,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Relationship to Dirichlet-Multinomial Regression",
"sec_num": "3.3"
},
{
"text": "We perform inference using a Markov chain Monte Carlo technique.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
{
"text": "We use a collapsed Gibbs sampling approach analogous to Griffiths and Steyvers (2004) , integrating out \u0398 and \u03a6. The update equation for the topic assignments is P r(z",
"cite_spans": [
{
"start": 56,
"end": 85,
"text": "Griffiths and Steyvers (2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
{
"text": "z (1) z (2) z (3) z (4) z (6) z (5) \u03a6 K w (1) w (2) w (3) w (4) w 5) w (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(d) i = k|z \u2212(d,i) , . . .) \u221d (n (d)\u2212(d,i) k + \u03b1 (d) k ) n (w (d) i )\u2212(d,i) k + \u03b2 w (d) i n \u2212(d,i) k + w \u03b2 w \u00d7 d \u2032 :d\u2208C(d \u2032 ) Polya(z (d \u2032 ) |\u03b1 (d \u2032 ) : z (d) i = k, z \u2212(d,i) , l)",
"eq_num": "(4)"
}
],
"section": "Inference",
"sec_num": "4"
},
{
"text": "where the n k 's are the counts of the occurrences of topic k over all of the entries determined by the superscript. The \u2212(d, i) superscript indicates excluding the current assignment for z (d) i . The update equation is similar to the update equations of Griffiths and Steyvers, but with a different \u03b1 for each document d, and with multiplicative weights for each document that cites it. These weights Polya(z (d) |\u03b1 (d) ) are the likelihood for a multivariate Polya (a.k.a. Dirichlet-multinomial) distribution,",
"cite_spans": [
{
"start": 190,
"end": 193,
"text": "(d)",
"ref_id": null
},
{
"start": 418,
"end": 421,
"text": "(d)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
{
"text": "Polya(z (d) |\u03b1 (d) ) = \u0393( k \u03b1 (d) k ) \u0393(n (d) + k \u03b1 (d) k ) k \u0393(n (d) k + \u03b1 (d) k ) \u0393(\u03b1 (d) k ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
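The multivariate Polya likelihood above is usually evaluated in log space. A minimal sketch under the definitions in this section (hypothetical function name, not the authors' code):

```python
import numpy as np
from scipy.special import gammaln

def log_polya(counts, alpha):
    """log Polya(z^(d) | alpha^(d)) for a topic-count histogram `counts`:
    log G(sum_k a_k) - log G(n + sum_k a_k)
      + sum_k [log G(n_k + a_k) - log G(a_k)], with G the gamma function."""
    s = alpha.sum()
    return (gammaln(s) - gammaln(counts.sum() + s)
            + np.sum(gammaln(counts + alpha) - gammaln(alpha)))
```

Sanity check: with K = 2, alpha = (1, 1), and a single word, either topic outcome has probability 1/2.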
{
"text": "In the case of TIR, in the collapsed model the full conditional posterior for the topical influence values l is P r(l|z, \u03bb) \u221d P r(z|l)P r(l|\u03bb). Here, P r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
{
"text": "(z|l) = D d=1 Polya(z (d) |l C (d) , z C (d) ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
{
"text": "The topical influence values l can be sampled using Metropolis-Hastings updates, or slice sampling. An alternative is to perform stochastic EM, optimizing the likelihood or the posterior probability of l, interleaved within the Gibbs sampler, as in Mimno and McCallum (2008) and . In experiments on synthetic data we found that maximum likelihood updates on l, obtained via gradient ascent, resulted in the lowest L1 error from the true l, so we use this strategy for the experimental results in this paper. The derivative of the log-likelihood with respect to the topical influence l",
"cite_spans": [
{
"start": 249,
"end": 274,
"text": "Mimno and McCallum (2008)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
{
"text": "(a) of article a is dP r(z|l) dl (a) = d:a\u2208C(d) \u03a8( k c\u2208C (d) l (c)z(c) k + K\u03b1) \u2212\u03a8( k c\u2208C (d) l (c)z(c) k + K\u03b1 + n (d) ) + d:a\u2208C(d) K k=1z (a) k \u03a8( c\u2208C (d) l (c)z(c) k + \u03b1 + n (d) k ) \u2212\u03a8( c\u2208C (d) l (c)z(c) k + \u03b1) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
{
"text": "where \u03a8(.) is the digamma function. For TIRE, the likelihood decomposes across documents and we can optimize the incoming edge weights for each document separately. We have dP r(z",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
{
"text": "(d) |l) dl (a,d) =\u03a8( k c\u2208C (d) l (c,d)z (c) k + K\u03b1) \u2212\u03a8( k c\u2208C (d) l (c,d)z (c) k + K\u03b1 + n (d) ) + K k=1z (a) k \u03a8( c\u2208C (d) l (c,d)z (c) k + \u03b1 + n (d) k ) \u2212\u03a8( c\u2208C (d) l (c,d)z (c) k + \u03b1) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
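The gradients above can be implemented directly with the digamma function. This is a sketch for the node-level (TIR) case, with assumed data structures: `zbar[c]` is the normalized topic histogram of article c and `ncounts[d]` the topic counts of document d; none of these names come from the paper.

```python
import numpy as np
from scipy.special import digamma

def grad_l(a, l, zbar, ncounts, cites, alpha=0.1):
    """d log Pr(z | l) / d l^(a) for TIR: a sum, over the documents d
    that cite a, of digamma terms from the collapsed Polya likelihood."""
    g = 0.0
    for d, refs in cites.items():
        if a not in refs:
            continue
        base = sum(l[c] * zbar[c] for c in refs) + alpha  # alpha^(d) vector
        s, n_d = base.sum(), ncounts[d].sum()
        g += digamma(s) - digamma(s + n_d)
        g += np.sum(zbar[a] * (digamma(base + ncounts[d]) - digamma(base)))
    return g
```

A finite-difference check against the log Polya likelihood of a single citing document confirms the algebra on a toy example.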
{
"text": "We optimize the node-level l's in TIRE via the least squares estimate (LSE), l (a) = 1 |{d:a\u2208C (a,d) .",
"cite_spans": [
{
"start": 95,
"end": 100,
"text": "(a,d)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
{
"text": "(d) }| d:a\u2208C (d) l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
{
"text": "Although the LSE for the mean of a truncated Gaussian is biased, it is widely used as it is more robust than the MLE (A'Hearn, 2004).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
{
"text": "In this section we experimentally investigate the properties of TIR and TIRE. We consider two scientific corpora: a collection of 3286 of articles from the Association for Computational Linguistics (ACL) conference 2 (Radev et al., 2009) published between 1987 and 2011, and a corpus of articles from the Neural Information Processing Systems (NIPS) conference 3 containing 1740 articles from 1987 to 1999. The corpora both contained a small number (53, and 14, respectively) of citation graph loops due to insider knowledge of simultaneous publications. Some loops were removed by manual deletion of \"insider knowledge\" edges, and others were removed by deleting edges in the loop uniformly at random. For computational efficiency, we performed approximate Gibbs updates where we drop the multiplicative Polya likelihood terms in Equation 4. This corresponds to only transmitting influence information downward in the citation DAG, but not transmitting \"reverse influence\" information upwards. Preliminary experiments on synthetic data indicated that this did not significantly impact the ability of the model to recover the topical influence weights. As one might expect, LDA is already capable of inferring topic distributions which are good enough to perform the regression on, without fully exploiting the additional feedback from the regression. This algorithm has a similar running time to the standard collapsed Gibbs sampler for LDA, as the regression step is not a bottleneck.",
"cite_spans": [
{
"start": 215,
"end": 237,
"text": "2 (Radev et al., 2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Analysis",
"sec_num": "5"
},
{
"text": "In all experiments, we set the hyper-parameters to \u03b1 = 0.1, \u03b2 = 0.1 and the \u03c3 parameter for the truncated Gaussian in TIRE to be 1. We interleaved regression steps every 10 Gibbs iterations. For exploratory data analysis experiments the models were trained for 500 burn-in iterations, and the samples from the final iterations were used for the analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Analysis",
"sec_num": "5"
},
{
"text": "It is not immediately obvious how to best validate an unsupervised model of citation influence. Ground truth is not well-defined and human evaluation requires extensive knowledge of the individual papers in the corpora. With this in mind, we explore how topical influence scores relate to document metadata, which serves as a proxy for ground truth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Validation using Metadata",
"sec_num": "5.1"
},
{
"text": "In many cases, if article c is repeatedly cited in the text of article d it may indicate that d builds heavily on c. We would therefore expect to see an association between repeated citations and edge-wise topical influence l (c,d) . For each of the 106 papers in the NIPS corpus with at least three distinct references, we counted the number of repeated citations for the most influential and least influential references according to the TIRE model (Figure 3) . Overall, the \"most influential\" references were cited 171 times in the text of their citing articles, while the \"least influential\" references were cited 128 times. Of the 45 articles where the counts were not tied, the most influential references had the higher citation counts 33 times. A sign test rejects the null hypothesis that the median difference in citation counts between least and most influential references is zero at \u03b1 = 0.05, with p-value \u2248 5 \u00d7 10 \u22124 . Self-citations, where at least one author is in common between cited and citing articles, are also informative ( Figure 4 ). Authors often build upon their own work, so we would expect self-citations to have higher edge-wise topical influence on average. For ACL the mean topical influence for a self citation edge is 2.80 and for a non-self citation is 1.40. For NIPS the means are 5.05 (self) and 3.15 (non-self). A two-sample t-test finds these differences are both significant at \u03b1 = 0.05.",
"cite_spans": [
{
"start": 226,
"end": 231,
"text": "(c,d)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 451,
"end": 461,
"text": "(Figure 3)",
"ref_id": "FIGREF3"
},
{
"start": 1046,
"end": 1054,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Model Validation using Metadata",
"sec_num": "5.1"
},
{
"text": "We also used a document prediction task to explore whether the posited latent structure is predictively useful. We selected roughly 10% of the articles in each corpus (170 and 330 documents for NIPS and ACL, respectively) for testing, chosen among the articles that made at least one citation. We held out a randomly selected set of 50% of their words and evaluated the log probability of the held out partial documents under each model. This is equivalent to evaluating on a set of new documents with the same set of references as the held out set. Evaluation was performed using annealed importance sampling (Neal, 2001) , as in Wallach et al. (2009) except we used multiple samples per likelihood computation.",
"cite_spans": [
{
"start": 610,
"end": 622,
"text": "(Neal, 2001)",
"ref_id": "BIBREF8"
},
{
"start": 631,
"end": 652,
"text": "Wallach et al. (2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Experiments",
"sec_num": "5.2"
},
{
"text": "The TIR models were compared to LDA and an \"additive\" version of DMR with link function \u03b1_k^{(d)} = x^{(d)\u22ba} \u03bb_k + \u03b1,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Experiments",
"sec_num": "5.2"
},
{
"text": "where the \u03bbs were constrained to be positive and given an exponential prior with mean one. For DMR, binary feature vectors encoded the presence or absence of each possible citation. For each algorithm, we burned in for 250 iterations, then executed 1000 iterations, optimizing topical influence weights/DMR parameters every 10th iteration. Held-out log probability scores were computed by performing AIS with every 100th sample, and averaging the results to estimate the posterior predictive probability P r(held out article|training set, citations, model).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Experiments",
"sec_num": "5.2"
},
{
"text": "It was found that all of the regression methods had superior predictive performance to LDA on these corpora, demonstrating that topical influence has predictive value (Table 1) . Although DMR performed slightly better than TIR predictively, TIR was competitive despite the fact that it has a factor of K less regression parameters. Note that DMR does not provide an interpretable notion of influence.",
"cite_spans": [],
"ref_spans": [
{
"start": 167,
"end": 176,
"text": "(Table 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Prediction Experiments",
"sec_num": "5.2"
},
{
"text": "In this section we explore the inferred topical influence scores l (d) , total topical influence scores T (d) and edgewise topical influence scores l (c,d) (recall their definitions in Equations 1, 2 and 3, respectively). Table 2 shows the most influential articles in the ACL corpus, according to citation counts, topical influence and total topical influence (the latter two inferred with the TIR model). The most frequently cited paper within the ACL corpus, written by Papineni et al., introduces BLEU, a technique for evaluating machine translation (MT) systems. 4 This paper is of great importance to the computational linguistics community because the method that it introduces is widely used to validate MT systems. However, the BLEU article has a relatively low topical influence value of 0.58, consistent with the fact that most of the papers that cite it use the technique as part of their methodology but do not build upon its ideas. We emphasize that topical influence measures a specific dimension of scientific importance, namely the tendency of an article to influence the ideas (as mediated by the topics) of citing articles; papers with low topical influence such as the BLEU article may be important for other reasons.",
"cite_spans": [
{
"start": 67,
"end": 70,
"text": "(d)",
"ref_id": null
},
{
"start": 106,
"end": 109,
"text": "(d)",
"ref_id": null
},
{
"start": 150,
"end": 155,
"text": "(c,d)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 222,
"end": 229,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Exploring Topical Influence",
"sec_num": "5.3"
},
{
"text": "Ranking papers by their influence weights l (d) ( Table 2 , middle) has the opposite difficulty to ranking by citation counts -the papers with the highest topical influence were typically cited only once, by the same authors. This makes sense, given what the model is designed to do. The lone citing papers were certainly topically influenced by these articles.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Exploring Topical Influence",
"sec_num": "5.3"
},
{
"text": "A more useful metric, however, is the total topical influence T (d) (the bottom sub-table in Table 2 ). This is the total number of words of prior concentration, summed over all of its citers, that the article has contributed, and is a measure of the total corpus-wide topical influence of the paper. This metric ranks the BLEU paper at 5th place, down from 1st place by citation count. The ACL paper with the highest total topical influence, by David Chiang, won the ACL best paper award in 2005.",
"cite_spans": [
{
"start": 64,
"end": 67,
"text": "(d)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 68,
"end": 101,
"text": "(the bottom sub-table in Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Exploring Topical Influence",
"sec_num": "5.3"
},
{
"text": "The behavior of the different metrics is echoed in the NIPS corpus (Table 3) .",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 76,
"text": "(Table 3)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Exploring Topical Influence",
"sec_num": "5.3"
},
{
"text": "Table 1: Wins, losses and average improvement for log probabilities of held-out articles, versus LDA. Each \"Win\" corresponds to the model assigning a higher log probability score for the test portion of a held-out document than LDA assigned to that document.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Exploring Topical Influence",
"sec_num": "5.3"
},
{
"text": "The most cited paper, \"Handwritten Digit Recognition,\" by Le Cun et al. (1990), is an early successful application of neural networks. The paper does not introduce novel models or algorithms, but rather, in the authors' words, \"show[s] that large back propagation (BP) networks can be applied to real image recognition problems.\" Thus, although it has an important role as a landmark neural network success story, it does not score highly in terms of topical influence. This paper is ranked 13th according to total topical influence, with a score of 1.6. The two top-ranked papers according to total topical influence, on Gaussian process regression and POMDPs respectively, were both seminal papers that spawned large bodies of related work. An interesting case is the third-ranked paper in the NIPS corpus, by Wang et al., on the theory of early stopping. It is only referenced three times, but has a very high topical influence of 19.3 words. All three citing papers are also on the theory of early stopping, and one of them, by Wang and Venkatesh, directly extends a theoretical result of this paper. Although it is easy to see why this paper scores highly on topical influence, in this case the metric has perhaps overstated its importance. A limitation of topical influence is that it can potentially give more credit than is due when an article is cited by a small number of topically similar papers, due to overfitting. This is likely to be an issue for any topic-based approach to modeling scientific influence. However, topics help to absorb lexical ambiguity and author-specific idiosyncrasies, mitigating the problem relative to word-based approaches.",
"cite_spans": [
{
"start": 58,
"end": 78,
"text": "Le Cun et al. (1990)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploring Topical Influence",
"sec_num": "5.3"
},
{
"text": "Using the TIRE model, we can also look at influence relationships between pairs of articles. Tables 4 and 5 show the most and least topically influential references, and the most and least influenced citing papers, for three example articles from ACL and NIPS, respectively. The model correctly assigns higher influence scores along the edges to and from relevant documents. For the ACL papers, the BLEU algorithm's article is inferred to have zero topical influence on Chiang's paper, consistent with its role in the paper as an evaluation technique. The paper most topically influenced by Chiang's paper, written by Yang and Zheng, aims to improve upon the ideas in that paper. In the NIPS corpus, the article by Bengio and Frasconi, on recurrent neural network architectures, extends previous work by the same authors, which is correctly assigned the highest topical influence. A particularly interesting case is the paper by Dayan and Hinton, which is heavily influenced by a paper by Moore, and in turn strongly influences a later paper by Moore, thus illustrating the interplay of scientific influence between authors along the citation graph. These three papers were on reinforcement learning, while the lowest-scoring reference and citer were on other subjects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploring Topical Influence",
"sec_num": "5.3"
},
{
"text": "Table 2: Most influential articles in the ACL Conference corpus, according to citation counts (top), topical influence l^(d) inferred by TIR (middle), and total topical influence T^(d) inferred by TIR (bottom). For total topical influence, the breakdown of T^(d) = l^(d) \u00d7 citation count is shown in parentheses.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Exploring Topical Influence",
"sec_num": "5.3"
},
{
"text": "This paper introduced the notion of topical influence, a quantitative measure of scientific impact which arises from a latent variable model called topical influence regression. The model builds upon the ideas of Dirichlet-multinomial regression to encode influence relationships between articles along the citation graph. By training TIR, we can recover topical influence scores that give us insight into the impact of scientific articles. The model was applied to two scientific corpora, demonstrating the utility of the method both quantitatively and qualitatively. In future work, the proposed framework could readily be extended to model other aspects of scientific influence, such as the effects of authors and journals on topical influence, and to exploit the con-text in which citations occur. From an exploratory analysis perspective, it would be instructive to compare topical influence trajectories over time for different papers. This could be further facilitated by explicitly modeling the dynamics of each article's topical influence score. The TIR framework could potentially also be applicable to other application domains such as modeling how interpersonal influence affects the spread of memes via social media.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions / Discussion",
"sec_num": "6"
},
{
"text": "To complement TIR, it would be useful to also have systems for identifying articles which are important for alternative reasons, such as providing methodological tools and/or demonstrating important facts. Ultimately a suite of such tools could feed into a system such as Google Scholar or Citeseer. We envision that this line of work will also be useful for building visualization tools to help researchers explore scientific corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions / Discussion",
"sec_num": "6"
},
{
"text": "http://clair.eecs.umich.edu/aan/ 3 http://www.arbylon.net/resources.html, published by Gregor Heinrich and based on an earlier collection due to Sam Roweis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Citations within the corpora are of course only a small fraction of the total set of citations for many of these papers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior National Business Center contract number D11PC20155. The U.S. government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the U.S. Government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A restricted maximum likelihood estimator for truncated height samples",
"authors": [
{
"first": "]",
"middle": [
"B"
],
"last": "A'hearn2004",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "A'hearn",
"suffix": ""
}
],
"year": 2004,
"venue": "Economics & Human Biology",
"volume": "2",
"issue": "1",
"pages": "5--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A'Hearn2004] B. A'Hearn. 2004. A restricted max- imum likelihood estimator for truncated height sam- ples. Economics & Human Biology, 2(1):5-19.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "[",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2003,
"venue": "The Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Blei et al.2003] D.M. Blei, A.Y. Ng, and M.I. Jordan. 2003. Latent Dirichlet allocation. The Journal of Ma- chine Learning Research, 3:993-1022.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The anatomy of a large-scale hypertextual web search engine. Computer networks and ISDN systems",
"authors": [
{
"first": "]",
"middle": [
"S"
],
"last": "Page1998",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Brin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Page",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "30",
"issue": "",
"pages": "107--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "and Page1998] S. Brin and L. Page. 1998. The anatomy of a large-scale hypertextual web search en- gine. Computer networks and ISDN systems, 30(1- 7):107-117.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The missing link-a probabilistic model of document content and hypertext connectivity",
"authors": [
{
"first": "]",
"middle": [
"J"
],
"last": "Blei2009",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "; D",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2001,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "430--436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Chang and Blei2009] J. Chang and D. Blei. 2009. Rela- tional topic models for document networks. In Artifi- cial Intelligence and Statistics, pages 81-88. [Cohn and Hofmann2001] D. Cohn and T. Hofmann. 2001. The missing link-a probabilistic model of docu- ment content and hypertext connectivity. In Advances in Neural Information Processing Systems, pages 430- 436.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised prediction of citation influences",
"authors": [
{
"first": "[",
"middle": [],
"last": "Dietz",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 24th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "233--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Dietz et al.2007] L. Dietz, S. Bickel, and T. Scheffer. 2007. Unsupervised prediction of citation influences. In Proceedings of the 24th International Conference on Machine Learning, pages 233-240.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Detecting topic evolution in scientific literature: how can citations help?",
"authors": [
{
"first": "S",
"middle": [],
"last": "Gerrish",
"suffix": ""
},
{
"first": "D",
"middle": [
"M T L"
],
"last": "Blei",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Steyvers",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 18th ACM Conference on Information and Knowledge Management",
"volume": "101",
"issue": "",
"pages": "957--966",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Gerrish and Blei2010] S. Gerrish and D.M. Blei. 2010. A language-based approach to measuring scholarly impact. In Proceedings of the 26th International Con- ference on Machine Learning, pages 375-382. [Griffiths and Steyvers2004] T.L. Griffiths and M. Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences of the United States of America, 101(Suppl 1):5228. [He et al.2009] Q. He, B. Chen, J. Pei, B. Qiu, P. Mitra, and L. Giles. 2009. Detecting topic evolution in sci- entific literature: how can citations help? In Proceed- ings of the 18th ACM Conference on Information and Knowledge Management, pages 957-966. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Handwritten digit recognition with a back-propagation network",
"authors": [
{
"first": "Le",
"middle": [],
"last": "Cun",
"suffix": ""
}
],
"year": 1990,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "396--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Le Cun et al.1990] B.B. Le Cun, JS Denker, D. Hender- son, RE Howard, W. Hubbard, and LD Jackel. 1990. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Process- ing Systems, pages 396-404.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Topicflow model: Unsupervised learning of topic-specific influences of hyperlinked documents",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lin ; D. Mimno",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum ; R. Nallapati",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mcfarland",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Pagerank without hyperlinks: Reranking with pubmed related article networks for biomedical text retrieval. BMC bioinformatics",
"volume": "9",
"issue": "",
"pages": "543--551",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lin. 2008. Pagerank without hyper- links: Reranking with pubmed related article networks for biomedical text retrieval. BMC bioinformatics, 9(1):270. [Mimno and McCallum2008] D. Mimno and A. McCal- lum. 2008. Topic models conditioned on arbitrary features with Dirichlet-multinomial regression. In Un- certainty in Artificial Intelligence, pages 411-418. [Nallapati et al.2011] R. Nallapati, D. McFarland, and C. Manning. 2011. Topicflow model: Unsupervised learning of topic-specific influences of hyperlinked documents. In International Conference on Artificial Intelligence and Statistics, pages 543-551.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Identifying the original contribution of a document via language modeling",
"authors": [
{
"first": "R",
"middle": [
"M"
],
"last": "Neal ; Radev",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings, ACL Workshop on Natural Language Processing and Information Retrieval for Digital Libraries",
"volume": "11",
"issue": "",
"pages": "1105--1112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.M. Neal. 2001. Annealed importance sam- pling. Statistics and Computing, 11(2):125-139. [Radev et al.2009] D. R. Radev, P. Muthukrishnan, and V. Qazvinian. 2009. The ACL anthology network cor- pus. In Proceedings, ACL Workshop on Natural Lan- guage Processing and Information Retrieval for Digi- tal Libraries, pages 54-61, Singapore. [Shaparenko and Joachims2009] B. Shaparenko and T. Joachims. 2009. Identifying the original con- tribution of a document via language modeling. In Machine Learning and Knowledge Discovery in Databases, pages 350-365. Springer. [Teufel et al.2006] S. Teufel, A. Siddharthan, and D. Tid- har. 2006. Automatic classification of citation func- tion. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 103-110. Association for Computational Lin- guistics. [Wallach et al.2009] H.M. Wallach, I. Murray, R. Salakhutdinov, and D. Mimno. 2009. Evalu- ation methods for topic models. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1105-1112. ACM.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Topic modeling: beyond bag-of-words",
"authors": [
{
"first": "H",
"middle": [
"M"
],
"last": "Wallach",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 23rd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "977--984",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H.M. Wallach. 2006. Topic modeling: beyond bag-of-words. In Proceedings of the 23rd In- ternational Conference on Machine Learning, pages 977-984. ACM.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Public knowledge: an essay concerning the social dimension of science",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Ziman",
"suffix": ""
}
],
"year": 1968,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.M. Ziman. 1968. Public knowledge: an essay concerning the social dimension of science. Cambridge University Press.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "The graphical model for the portion of the TIR model connected to article a (the links from the z's and l's to the \u03b1(d) 's are deterministic). model) for document d as being drawn from a Polya urn scheme with \u03b1 (d)",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "(a) An example citation network. (b) Graphical model for TIR on the example network, collapsing out \u0398 but retaining topics \u03a6. Influence variables and hyperparameters not shown for simplicity.",
"num": null,
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Topical influence per edge versus number of times cited by the citing article (NIPS). Several articles had zero in-text citations due to author or dataset errors.",
"num": null,
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"text": "Topical influence for self and non-self citation edges. Left: ACL. Right: NIPS.",
"num": null,
"uris": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td>Top 5 Articles by Citation Count</td></tr><tr><td colspan=\"2\">140 BLEU: a 11.37 Bayesian Learning of Non-compositional Phrases with Synchronous Parsing. H. Zhang, C. Quirk, R. Moore, D. Gildea.</td></tr><tr><td>10.48</td><td>A Plan Recognition Model for Clarification Subdialogues. D. Litman, J. Allen.</td></tr><tr><td>10.38</td><td>PCFGs with Syntactic and Prosodic Indicators of Speech Repairs. J. Hale et al.</td></tr><tr><td>10.30</td><td>Referring as Requesting, P. Cohen</td></tr><tr><td/><td>Top 5 Articles by Total Topical Influence</td></tr><tr><td>111.46 (1.74 \u00d7 64)</td><td>A Hierarchical Phrase-Based Model for Statistical Machine Translation. D. Chiang.</td></tr><tr><td>101.12 (6.74 \u00d7 15)</td><td>Maximum Entropy Based Phrase Reordering Model for Statistical Machine Translation. D. Xiong, Q. Liu, S. Lin.</td></tr><tr><td>98.56 (5.80 \u00d7 17)</td><td>A Logical Semantics for Feature Structures. R. Kasper, W. Rounds.</td></tr><tr><td>85.15 (2.18 \u00d7 39)</td><td>Discriminative Training and Maximum Entropy Models for Statistical Machine Translation. F. Och, H. Ney</td></tr><tr><td>81.82 (0.58 \u00d7 140)</td><td>BLEU: a Method for Automatic Evaluation of Machine Translation, K. Papineni, S. Roukos, T. Ward, and W. Zhu.</td></tr></table>",
"text": "Method for Automatic Evaluation of Machine Translation. K. Papineni, S. Roukos, T. Ward, W. Zhu. 105 Minimum Error Rate Training in Statistical Machine Translation. F. Och. 64 A Hierarchical Phrase-Based Model for Statistical Machine Translation. D. Chiang. 64 Accurate Unlexicalized Parsing. D. Klein, C. Manning. 59 Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. D. Yarowsky. Top 5 articles by Topical Influence 11.38 Refining Event Extraction through Cross-document Inference. H. Ji, R. Grishman.",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">A Hierarchical Phrase-Based Model for Statistical Machine Translation. D. Chiang.</td></tr><tr><td>Most influential reference</td><td>1.48</td><td>Discriminative Training and Maximum Entropy Models for Statistical Machine Translation. F. Och and H. Ney.</td></tr><tr><td>Least influential reference</td><td>0.00</td><td>BLEU: a Method for Automatic Evaluation of Machine Translation. K. Papineni, S. Roukos, T. Ward, W. Zhu.</td></tr><tr><td>Most influenced citer</td><td>2.54</td><td>Toward Smaller, Faster, and Better Hierarchical Phrase-based SMT. M. Yang, J. Zheng.</td></tr><tr><td>Least influenced citer</td><td>0.60</td><td>An Optimal-time Binarization Algorithm for Linear Context-Free Rewriting Systems with Fan-out Two.</td></tr><tr><td/><td/><td>C. Gmez-Rodrguez, G. Satta.</td></tr><tr><td>Most influential reference</td><td>2.52</td><td>Subject-dependent Co-occurrence and Word Sense Disambiguation. J. Guthrie, L. Guthrie, Y. Wilks, H. Aidinejad.</td></tr><tr><td>Least influential reference</td><td>0.53</td><td>Word-sense Disambiguation using Statistical Methods. P. Brown, S. Della Pietra, V. Della Pietra, R. Mercer.</td></tr><tr><td>Most influenced citer</td><td>1.81</td><td>Discriminating Image Senses by Clustering with Multimodal Features. N. Loeff, C. Alm, D. Forsyth.</td></tr><tr><td>Least influenced citer</td><td>0.00</td><td>Semi-supervised Convex Training for Dependency Parsing. Q. Wang, D. Schuurmans, D. Lin.</td></tr><tr><td/><td/><td>Accurate Unlexicalized Parsing. D. Klein, C. Manning.</td></tr><tr><td>Most influential reference</td><td>3.87</td><td>Parsing with Treebank Grammars: Empirical Bounds, Theoretical Models, and the Structure of the Penn Treebank.</td></tr><tr><td/><td/><td>D. Klein and C. Manning.</td></tr><tr><td>Least influential reference</td><td>0.81</td><td>Efficient Parsing for Bilexical Context-Free Grammars and Head Automaton Grammars. J. Eisner, G. 
Satta.</td></tr><tr><td>Most influenced citer</td><td>1.67</td><td>Evaluating the Accuracy of an Unlexicalized Statistical Parser on the PARC DepBank. T. Briscoe, J. Carroll.</td></tr><tr><td>Least influenced citer</td><td>0.00</td><td/></tr></table>",
"text": "Most influential articles in the NIPS corpus, according to citation counts (top), topical influence l(d) inferred by TIR (middle), and total topical influence T(d) inferred by TIR (bottom). Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. D. Yarowsky.",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td/><td>Feudal Reinforcement Learning. P. Dayan, G. Hinton</td></tr><tr><td colspan=\"3\">Most influential reference Memory-based Least influential reference 5.47 0.00 A Delay-Least influential reference 0.15 Skeletonization: A Most influenced citer 3.08 Structural Risk Minimization for Character Recognition. I. Guyon, V. Vapnik, B. Boser, L. Bottou, S. Solla.</td></tr><tr><td colspan=\"3\">Least influenced citer Structural and Most influential reference 0.64 5.29 Credit Assignment through Time: Alternatives to Backpropagation. Y. Bengio, P. Frasconi.</td></tr><tr><td>Least influential reference</td><td>0.00</td><td>Induction of Multiscale Temporal Structure. M. Mozer</td></tr><tr><td>Most influenced citer</td><td>2.66</td><td>Learning Fine Motion by Markov Mixtures of Experts. M. Meila, M. Jordan.</td></tr><tr><td>Least influenced citer</td><td>1.47</td><td>Recursive Estimation of Dynamic Modular RBF Networks. V. Kadirkamanathan, M. Kadirkamanathan.</td></tr></table>",
"text": "Least and most influential references and citers, and the influence weights along these edges, inferred by the TIRE model for three example ACL articles.Reinforcement Learning: Efficient Computation with Prioritized Sweeping. A. Moore, C. Atkeson. Line Based Motion Detection Chip. T. Horiuchi, J. Lazzaro, A. Moore, C. Koch. Most influenced citer 3.36 The Parti-Game Algorithm for Variable Resolution Reinforcement Learning in Multidimensional State-Spaces. A. Moore. Least influenced citer 1.71 Multi-time Models for Temporally Abstract Planning. D. Precup, R. Sutton. Optimal Brain Damage. Y. Le Cun, J. Denker , S. Solla. Most influential reference 2.82 Comparing Biases for Minimal Network Construction with Back-Propagation. S. Hanson, L. Pratt. Technique for Trimming the Fat from a Network via Relevance Assessment. M. Mozer, P. Smolensky. Behavioral Evolution of Recurrent Networks. G. Saunders, P. Angeline, J. Pollack. An Input Output HMM Architecture. Y. Bengio, P. Frasconi.",
"html": null
},
"TABREF6": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Least and most influential references and citers, and the influence weights along these edges, inferred by the TIRE model for three example NIPS articles.",
"html": null
}
}
}
}