| { |
| "paper_id": "D12-1020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:22:41.073862Z" |
| }, |
| "title": "A Phrase-Discovering Topic Model Using Hierarchical Pitman-Yor Processes", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [ |
| "V" |
| ], |
| "last": "Lindsey", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "robert.lindsey@colorado.edu" |
| }, |
| { |
| "first": "William", |
| "middle": [ |
| "P" |
| ], |
| "last": "Headden", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "headdenw@twocassowaries.com" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "J" |
| ], |
| "last": "Stipicevic", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Topic models traditionally rely on the bagof-words assumption. In data mining applications, this often results in end-users being presented with inscrutable lists of topical unigrams, single words inferred as representative of their topics. In this article, we present a hierarchical generative probabilistic model of topical phrases. The model simultaneously infers the location, length, and topic of phrases within a corpus and relaxes the bagof-words assumption within phrases by using a hierarchy of Pitman-Yor processes. We use Markov chain Monte Carlo techniques for approximate inference in the model and perform slice sampling to learn its hyperparameters. We show via an experiment on human subjects that our model finds substantially better, more interpretable topical phrases than do competing models.", |
| "pdf_parse": { |
| "paper_id": "D12-1020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Topic models traditionally rely on the bagof-words assumption. In data mining applications, this often results in end-users being presented with inscrutable lists of topical unigrams, single words inferred as representative of their topics. In this article, we present a hierarchical generative probabilistic model of topical phrases. The model simultaneously infers the location, length, and topic of phrases within a corpus and relaxes the bagof-words assumption within phrases by using a hierarchy of Pitman-Yor processes. We use Markov chain Monte Carlo techniques for approximate inference in the model and perform slice sampling to learn its hyperparameters. We show via an experiment on human subjects that our model finds substantially better, more interpretable topical phrases than do competing models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Probabilistic topic models have been the focus of intense study in recent years. The archetypal topic model, Latent Dirichlet Allocation (LDA), posits that words within a document are conditionally independent given their topic (Blei et al., 2003) . This \"bag-of-words\" assumption is a common simplification in which word order is ignored, but one which introduces undesirable properties into a model meant to serve as an unsupervised exploratory tool for data analysis.", |
| "cite_spans": [ |
| { |
| "start": 228, |
| "end": 247, |
| "text": "(Blei et al., 2003)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "When an end-user runs a topic model, the output he or she is often interested in is a list of topical unigrams, words probable in a topic (hence, representative of it). In many situations, such as during the use of the topic model for the analysis of a new or ill-understood corpus, these lists can be insufficiently informative. For instance, if a layperson ran LDA on the NIPS corpus, he would likely get a topic whose most prominent words include policy, value, and reward. Seeing these words isolated from their context in a list would not be particularly insightful to the layperson unfamiliar with computer science research. An alternative to LDA which produced richer output like policy iteration algorithm, value function, and model-based reinforcement learning alongside the unigrams would be much more enlightening. Most situations where a topic model is actually useful for data exploration require a model whose output is rich enough to dispel the need for the user's extensive prior knowledge of the data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Furthermore, lists of topical unigrams are often made only marginally interpretable by virtue of their non-compositionality, the principle that a collocation's meaning typically is not derivable from its constituent words (Schone and Jurafsky, 2001) . For example, the meaning of compact disc as a music medium comes from neither the unigram compact nor the unigram disc, but emerges from the bigram as a whole. Moreover, non-compositionality is topic dependent; compact disc should be interpreted as a music medium in a music topic, and as a small region bounded by a circle in a mathematical topic. LDA is prone to decompose collocations into different topics and violate the principle of noncompositionality, and its unigram lists are harder to interpret as a result.", |
| "cite_spans": [ |
| { |
| "start": 222, |
| "end": 249, |
| "text": "(Schone and Jurafsky, 2001)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We present an extension of LDA called Phrase-Discovering LDA (PDLDA) that satisfies two desiderata: providing rich, interpretable output and honoring the non-compositionality of collocations. PDLDA is built in the tradition of the \"Topical N-Gram\" (TNG) model of Wang et al. (2007) . TNG is a topic model which satisfies the first desideratum by producing lists of representative, topically cohesive n-grams of the form shown in Figure 1 . We diverge from TNG by our addressing the second desideratum, and we do so through a more straightforward and intuitive definition of what constitutes a phrase and its topic. In the furtherance of our goals, we employ a hierarchical method of modeling phrases that uses dependent Pitman-Yor processes to ameliorate overfitting. Pitman-Yor processes have been successfully used in the past in n-gram (Teh, 2006) and LDA-based models (Wallach, 2006) for creating Bayesian language models which exploit word order, and they prove equally useful in this scenario of exploiting both word order and topics.", |
| "cite_spans": [ |
| { |
| "start": 263, |
| "end": 281, |
| "text": "Wang et al. (2007)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 839, |
| "end": 850, |
| "text": "(Teh, 2006)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 855, |
| "end": 887, |
| "text": "LDA-based models (Wallach, 2006)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 429, |
| "end": 437, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This article is organized as follows: after describing TNG, we discuss PDLDA and how PDLDA addresses the limitations of TNG. We then provide details of our inference procedures and evaluate our model against competing models on a subset of the TREC AP corpus (Harman, 1992) in an experiment on human subjects which assesses the interpretability of topical n-gram lists. The experiment is premised on the notion that topic models should be evaluated through a real-world task instead of through information-theoretic measures which often negatively correlate with topic quality (Chang et al., 2009) .", |
| "cite_spans": [ |
| { |
| "start": 259, |
| "end": 273, |
| "text": "(Harman, 1992)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 577, |
| "end": 597, |
| "text": "(Chang et al., 2009)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "2 Background: LDA and TNG LDA represents documents as probabilistic mixtures of latent topics. Each word w in a corpus w is drawn from a distribution \u03c6 indexed by a topic z, where z is drawn from a distribution \u03b8 indexed by its document d. The formal definition of LDA is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u03b8 d \u223c Dirichlet (\u03b1) z i | d, \u03b8 \u223c Discrete (\u03b8 d ) \u03c6 z \u223c Dirichlet (\u03b2) w i | z i , \u03c6 \u223c Discrete (\u03c6 z i )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "where \u03b8 d is document d's topic distribution, \u03c6 z is topic z's distribution over words, z i is the topic assignment of the ith token, and w i is the ith word. \u03b1 and \u03b2 are hyperparameters to the Dirichlet priors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Here and throughout the article, we use a bold font for vector notation: for example, z is the vector of all topic assignments, and its ith entry, z i , corresponds to the topic assignment of the ith token in the corpus. TNG extends LDA to model n-grams of arbitrary length in order to create the kind of rich output for text mining discussed in the introduction. It does this by representing a joint distribution P (z, c|w) where each c i is a Boolean variable that signals the start of a new n-gram beginning at the ith token. c partitions a corpus into consecutive non-overlapping n-grams of various lengths. Formally, TNG differs from LDA by the distributional assumptions", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "w i | w i\u22121 , z i , c i = 1, \u03c6 \u223c Discrete(\u03c6 z i ) w i | w i\u22121 , z i , c i = 0, \u03c3 \u223c Discrete(\u03c3 z i w i\u22121 ) c i | w i\u22121 , z i\u22121 , \u03c0 \u223c Bernoulli(\u03c0 z i\u22121 w i\u22121 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "where the new distributions \u03c0 zw and \u03c3 zw are endowed with conjugate prior distributions: \u03c0 zw \u223c Beta(\u03bb) and \u03c3 zw \u223c Dirichlet(\u03b4). When c i = 0, word w i is joined into a topic-specific bigram with w i\u22121 . When c i = 1, w i is drawn from a topicspecific unigram distribution and is the start of a new n-gram.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "An unusual feature of TNG is that words within a topical n-gram, a sequence of words delineated by c, do not share the same topic. To compensate for this after running a Gibbs sampler, Wang et al. (2007) analyze each topical n-gram post hoc as if the topic of the final word in the n-gram was the topic assignment of the entire n-gram. Though this design simplifies inference, we perceive it as a shortcoming since the aforementioned principle of non-compositionality supports the intuitive idea that each collocation ought to be drawn from a single topic. Another potential drawback of TNG is that the topic-specific bigram distributions \u03c3 zw share no probability mass between each other or with the unigram distributions \u03c6 z . Hence, observing a bigram under one topic does not make it more likely under another topic or make its constituent unigrams more probable. To be more concrete, in TNG, observing space shuttle under a topic z (or under two topics, one for each word) regrettably does not make space shuttle more likely under a topic z = z, nor does it make observing shuttle more likely under any topic. Smoothing, the sharing of probability mass between ", |
| "cite_spans": [ |
| { |
| "start": 185, |
| "end": 203, |
| "text": "Wang et al. (2007)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "... z i-1 z i z i+1 w i-1 w i+1 c i c i+1 ... ... ... G \u03b8 \u03c0 \u03b1 \u03bb w i D T u a b \u03c1 \u03b5 ... ... V |u| Figure 2: PDLDA drawn in plate notation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "contexts, is desirable so that a model like this does not need to independently infer the probability of every bigram under every topic. The advantages of smoothing are especially pronounced for small corpora or for a large number of topics. In these situations, the observed number of bigrams in a given topic will necessarily be very small and thus not support strong inferences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A more natural definition of a topical phrase, one which meets our second desideratum, is to have each phrase possess a single topic. We adopt this intuitive idea in PDLDA. It can also be understood through the lens of Bayesian changepoint detection. Changepoint detection is used in time series models in which the generative parameters periodically change abruptly (Adams and MacKay, 2007) . Viewing a sentence as a time series of words, we posit that the generative parameter, the topic, changes period-ically in accordance with the changepoint indicators c. Because there is no restriction on the number of words between changepoints, topical phrases can be arbitrarily long but will always have a single topic drawn from \u03b8 d . The full definition of PDLDA is given by", |
| "cite_spans": [ |
| { |
| "start": 367, |
| "end": 391, |
| "text": "(Adams and MacKay, 2007)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PDLDA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "w i | u \u223c Discrete(G u ) G u \u223c PYP(a |u| , b |u| , G \u03c0(u) ) G \u2205 \u223c PYP(a 0 , b 0 , H) z i | d, z i\u22121 , \u03b8 d , c i \u223c \u03b4 z i\u22121 if c i = 0 Discrete (\u03b8 d ) if c i = 1 c i | w i\u22121 , z i\u22121 , \u03c0 \u223c Bernoulli \u03c0 w i\u22121 z i\u22121", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PDLDA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "with the prior distriutions over the parameters as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PDLDA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u03b8 d \u223c Dirichlet (\u03b1) \u03c0 zw \u223c Beta (\u03bb) a |u| \u223c Beta (\u03c1) b |u| \u223c Gamma ( )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PDLDA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Like TNG, PDLDA assumes that the probability of a changepoint c i+1 after the ith token depends on the current topic z i and word w i . This causes the length of a phrase to depend on its topic and constituent words. The changepoints explicitly model which words tend to start and end phrases in each document. Depending on c i , z i is either set deterministically to the preceding topic (when c i = 0) or is drawn anew from \u03b8 d (when c i = 1). In this way, each topical phrase has a single topic drawn from its document's topic distribution. As in TNG, the parameters \u03c0 zw and \u03b8 d are given conjugate priors parameterized by \u03bb and \u03b1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PDLDA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Let u be a context vector consisting of the phrase topic and the past m words:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PDLDA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "u < z i , w i\u22121 , w i\u22122 , . . . , w i\u2212m >.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PDLDA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The operator \u03c0(u) denotes the prefix of u, the vector with the rightmost element of u removed. |u| denotes the length of u, and \u2205 represents an empty context. For practical reasons, we pad u with a special start symbol when the context overlaps a phrase boundary. For example, the first word w i of a phrase beginning at a position i necessarily has c i = 1; consequently, all the preceding words w i\u2212j in the context vector are treated as start symbols so that w i is effectively drawn from a topic-specific unigram distribution.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PDLDA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In PDLDA, each token is drawn from a distribution conditioned on its context u. When m = 1, this conditioning is analogous to TNG's word distribution. However, in contrast with TNG, the word Figure 3 : Illustration of the hierarchical Pitman-Yor process for a toy two-word vocabulary V = {honda, civic} and two-topic (T = 2) model with m = 1. Each node G in the tree is a Pitman-Yor process whose base distribution is its parent node, and H is a uniform distribution over V . When, for example, the context is u = z 1 : honda, the darkened path is followed and the probability of the next word is calculated from the shaded node using Equation 1, which combines predictions from all the nodes along the darkened path.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 191, |
| "end": 199, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "PDLDA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "H G z 1 G z 1 :honda G \u00d8 z 1 honda G z 1 :civic civic G z 2 G z 2 :honda z 2 honda G z 2 :civic civic", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PDLDA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "distributions used are Pitman-Yor processes (PYPs) linked together into a tree structure. This hierarchical construction creates the desired smoothing among different contexts. The next section explains this hierarchical distribution in more detail.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PDLDA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Words in PDLDA are emitted from G u , which has a PYP prior (Pitman and Yor, 1997) . PYPs are a generalization of the Dirichlet Process, with the addition of a discount parameter 0 \u2264 a \u2264 1. When considering the distribution of a sequence of words w drawn iid from a PYP-distributed G, one can analytically marginalize G and consider the resulting conditional distribution of w given its parameters a, b, and base distribution \u03c6. This marginal can best be understood by considering the distribution of any w i |w 1 , . . . , w i\u22121 , a, b, \u03c6, which is characterized by a generative process known as the generalized Chinese Restaurant Process (CRP) (Pitman, 2002) . In the CRP metaphor, one imagines a restaurant with an unbounded number of tables, where each table has one shared dish (a draw from \u03c6) and can seat an unlimited number of customers. The CRP specifies a process by which customers entering the restaurant choose a table to sit at and, consequently, the dish they eat. The first customer to arrive always sits at the first table. Subsequent customers sit at an occupied table k with probability proportional to c k \u2212 a and choose a new unoccupied table with probability proportional to b + ta, where c k is the number of customers seated at table k and t is the number of occupied tables in G. For our language modeling purposes, \"customers\" are word tokens and \"dishes\" are word types.", |
| "cite_spans": [ |
| { |
| "start": 60, |
| "end": 82, |
| "text": "(Pitman and Yor, 1997)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 646, |
| "end": 660, |
| "text": "(Pitman, 2002)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical Pitman-Yor process", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The hierarchical PYP (HPYP) is an intuitive recursive formulation of the PYP in which the base distribution \u03c6 is itself PYP-distributed. Figure 3 demonstrates this principle as applied to PDLDA. The hierarchy forms a tree structure, where leaves are restaurants corresponding to full contexts and internal nodes correspond to partial contexts. An edge between a parent and child node represents a dependency of the child on the parent, where the base distribution of the child node is its parent. This smooths each context's distribution like the Bayesian n-gram model of Teh (2006) , which is a Bayesian version of interpolated Kneser-Ney smoothing (Chen and Goodman, 1998). One ramification of this setup is that if a word occurs in a context u, the sharing makes it more likely in other contexts that have something in common with u, such as a shared topic or word.", |
| "cite_spans": [ |
| { |
| "start": 572, |
| "end": 582, |
| "text": "Teh (2006)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 137, |
| "end": 145, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Hierarchical Pitman-Yor process", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The HPYP gives the following probability for a word following the context u being w:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical Pitman-Yor process", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P u (w | \u03c4, a, b) = c uw\u2022 \u2212 a |u| t uw b |u| + c u\u2022\u2022 + b |u| + a |u| t u\u2022 b |u| + c u\u2022\u2022 P \u03c0(u) (w | \u03c4, a, b)", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Hierarchical Pitman-Yor process", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where P \u03c0(\u2205) (w|\u03c4, a, b) = G \u2205 (w), c uw\u2022 is the number of customers eating dish w in restaurant u, and t uw is the number of tables serving w in restaurant u, and \u03c4 represents the current seating arrangement. Here and throughout the rest of the paper, we use a dot to indicate marginal counts: e.g., c uw\u2022 = k c uwk where c uwk is the number of customers eating w in u at table k. The base distribution of G \u2205 was chosen to be uniform: H(w) = 1/V with V being the vocabulary size. The above equation an interpolation between distributions of context lengths |u|, |u| \u2212 1, . . . 0 and realizes the sharing of statistical strength between different contexts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical Pitman-Yor process", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In this section, we describe Markov chain Monte Carlo procedures to sample from P (z, c, \u03c4 |w, U ), the posterior distribution over topic assignments z, phrase boundaries c, and seating arrangements \u03c4 given an observed corpus w. Let U be shorthand for \u03b1, \u03bb, a, b. In order to draw samples from P (z, c, \u03c4 |w, U ), we employ a Metropolis-Hastings sampler for approximate inference. The sampler we use is a collapsed sampler (Griffiths and Steyvers, 2004) , wherein \u03b8, \u03c6, and G are analytically marginalized. Because we marginalize each G, we use the Chinese Restaurant Franchise representation of the hierarchical PYPs (Teh, 2006) . However, rather than onerously storing the table assignment of every token in w, we store only the counts of how many tables there are in a restaurant and how many customers are sitting at each table in that restaurant. We refer the inquisitive reader to the appendix of Teh (2006) for further details of this procedure.", |
| "cite_spans": [ |
| { |
| "start": 423, |
| "end": 453, |
| "text": "(Griffiths and Steyvers, 2004)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 618, |
| "end": 629, |
| "text": "(Teh, 2006)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 903, |
| "end": 913, |
| "text": "Teh (2006)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Our sampling strategy for a given token i in document d is to jointly propose changes to the changepoint c i and topic assignment z i , and then to the seating arrangement \u03c4 . Recall that according to the model, if c i = 0, z i = z i\u22121 ; otherwise z i is generated from the topic distribution for document d. Since the topic assignment remains the same until a new changepoint at a position i is reached, each token w j for j from position i until i \u2212 1 will depend on z i because for these j, z j = z i . We call this set of tokens the phrase suffix of the ith token and denote it s(i). More formally, let s(i) be the maximal set of continuous indices j \u2265 i including i such that, if j = i, c j = 0. That is, s(i) are the indices comprising the remainder of the phrase beginning at position i. In addition, let x(i) indicate the extended suffix version of s(i) which includes one additional index:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "x(i) {s(i) \u222a {max (s(i)) + 1}}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In addition to the words in the suffix s(i), the changepoint indicator variables c j for j in x(i) are also conditioned on z i . To make these dependencies more explicit, we refer to z s(i) z j \u2200j \u2208 s(i), which are constrained by the model to share a topic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The variables that depend directly on", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "z i , c i are z s(i) , w s(i) , c x(i) .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The proposal distribution first draws from a multinomial over T + 1 options: one option for c i = 0, z i = z i \u2212 1; and one for c i = 1 paired with each possible z i = z \u2208 1 . . . T . This is given by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "P (z s(i) , c i | z \u00acs(i) , c \u00aci , \u03c4 \u00acs(i) , w, U ) \u221d j\u2208x(i) n \u00acx(j) z j\u22121 w j\u22121 c j + \u03bb c j n \u00acx(j) z j\u22121 w j\u22121 \u2022 + \u03bb 0 + \u03bb 1 j\u2208s(i) P (z j | c, z \u00acs(j) , U ) P u j (w j | \u03c4 \u00acs(i) , U ) with P (z j | c, z \u00acs(j) , U ) = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 n \u00acs(j) dz j + \u03b1 n \u00acs(j) d\u2022 + T \u03b1 if c j = 1 \u03b4 z j ,z j\u22121 if c j = 0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "P u j (w j | \u03c4 \u00acs(i) , U ) is given by Equation 1,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "T is the number of topics, n", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u00acs(j) dz", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "is the number of phrases in document d that have topic z when s(j)'s assignment is excluded, and n", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u00acs(j)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "zwc is the number of times a changepoint c has followed a word w with topic z when s(j)'s assignments are excluded.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "After drawing a proposal for c i , z s(i) for token i, the sampler adds a customer eating w i to a table serving w i in restaurant u i . An old table k is selected with probability \u221d max(0, c uwk \u2212 a |u| ) and a new table is selected with probability \u221d (b", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "|u i | + a |u i | t u i \u2022 )P \u03c0(u) (w i ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Let z s(i) , c i , \u03c4 s(i) denote the proposed change to z s(i) , c i , \u03c4 s(i) . We accept the proposal with probability min(A, 1) where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "A =P (z s(i) , c i , \u03c4 s(i) ) Q(z s(i) , c i , \u03c4 s(i) ) P (z s(i) , c i , \u03c4 s(i) ) Q(z s(i) , c i , \u03c4 s(i) )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where Q is the proposal distribution andP is the true unnormalized distribution.P differs from Q in that the probability of each word w j and the seating arrangement depends only on \u00acs(j), as opposed to the simplification of using \u00acs(i). Almost all proposals are accepted; hence, this theoretically motivated Metropolis Hastings correction step makes little difference in practice. Because the parameters a and b have no intuitive interpretation and we lack any strong belief about what they should be, we give them vague priors where \u03c1 1 = \u03c1 2 = 1 and 1 = 10, 2 = .1. We then interleave a slice sampling algorithm (Neal, 2000) between sweeps of the Metropolis-Hastings sampler to learn these parameters. We chose not to do inference on \u03b1 in order to make the tests of our model against TNG more equitable.", |
| "cite_spans": [ |
| { |
| "start": 615, |
| "end": 627, |
| "text": "(Neal, 2000)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "An integral part of modeling topical phrases is the relaxation of the bag-of-words assumption in LDA. There are many models that make this relaxation. Among them, Griffiths and Steyvers (2005) present a model in which words are generated either conditioned on a topic or conditioned on the previous word in a bigram, but not both. They use this to model human performance on a word-association task. Wallach 2006experiments with incorporating LDA into a bigram language model. Her model uses a hierarchical Dirichlet to share parameters across bigrams in a topic in a manner similar to our use of PYPs, but it lacks a notion of the topic being shared between the words in an n-gram. The Hidden Topic Markov Model (HTMM) (Gruber et al., 2007) assumes that all words in a sentence have the same topic, and consecutive sentences are likely to have the same topic. By dropping the independence assumption among topics, HTMM is able to achieve lower perplexity scores than LDA at minimal additional computational costs. These models are unconcerned with topical n-grams and thus do not model phrases. Johnson (2010) presents an Adaptor Grammar model of topical phrases. Adaptor Grammars are a framework for specifying nonparametric Bayesian models over context-free grammars in which certain subtrees are \"memoized\" or remembered for reuse. In Johnson's model, subtrees corresponding to common phrases for a topic are memoized, resulting in a model in which each topic is associated with a distribution over whole phrases. While it is a theoretically elegant method for finding topical phrases, for large corpora we found inference to be impractically slow.", |
| "cite_spans": [ |
| { |
| "start": 163, |
| "end": 192, |
| "text": "Griffiths and Steyvers (2005)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 720, |
| "end": 741, |
| "text": "(Gruber et al., 2007)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1096, |
| "end": 1110, |
| "text": "Johnson (2010)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Perplexity is the typical information theoretic measure of language model quality used in lieu of extrinsic measures, which are more difficult and costly to run. However, it is well known that perplexity scores may negatively correlate with actual quality as assessed by humans (Chang et al., 2009) . With that fact in mind, we expanded the methodology of Chang et al. (2009) to create a \"phrase intrusion\" task that quantitatively compares the quality of the topical n-gram lists produced by our model against those of other models.", |
| "cite_spans": [ |
| { |
| "start": 278, |
| "end": 298, |
| "text": "(Chang et al., 2009)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 356, |
| "end": 375, |
| "text": "Chang et al. (2009)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Intrusion Experiment", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Each of 48 subjects underwent 80 trials of a webbased experiment on Amazon Mechanical Turk, a reliable (Paolacci et al., 2010) and increasingly common venue for conducting online experiments. In each trial, a subject is presented with a randomly ordered list of four n-grams (cf. Figure 4) . Each subject's task is to select the intruder phrase, a spurious n-gram not belonging with the others in the list. If, other than the intruder, the items in the list are all on the same topic, then subjects can easily identify the intruder because the list is semantically cohesive and makes sense. If the list is incohesive and has no discernible topic, subjects must guess arbitrarily and performance is at random.", |
| "cite_spans": [ |
| { |
| "start": 103, |
| "end": 126, |
| "text": "(Paolacci et al., 2010)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 280, |
| "end": 289, |
| "text": "Figure 4)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Phrase Intrusion Experiment", |
| "sec_num": "5" |
| }, |
| { |
| "text": "To construct each trial's list, we chose two topics z and z (z = z ), then selected the three most probable n-grams from z and the intruder phrase, an n-gram probable in z and improbable in z. This design ensures that the intruder is not identifiable due solely to its being rare. Interspersed among the phrase intrusion trials were several simple screening trials intended to affirm that subjects possessed a minimal level of attentiveness and reading comprehension. For example, one such screening trial presented subjects with the list banana, apple, television, orange. Subjects who got any of these trials wrong were excluded from our analyses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Intrusion Experiment", |
| "sec_num": "5" |
| }, |
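The trial-construction procedure above can be sketched as follows, assuming a hypothetical `topics` mapping from topic ids to per-phrase probabilities (an illustrative stand-in for a trained model's output, not the paper's actual data structures):

```python
import random

def build_trial(topics, z, z_prime, rng=random):
    """Assemble one phrase-intrusion trial from two topics' phrase scores.

    topics: dict mapping topic id -> {phrase: probability}.
    Returns the shuffled four-item list and the intruder phrase.
    """
    by_z = sorted(topics[z], key=topics[z].get, reverse=True)
    in_topic = by_z[:3]  # the three most probable n-grams of topic z
    # Intruder: an n-gram probable in z' but improbable in z, so it is not
    # identifiable due solely to its being rare overall.
    candidates = sorted(topics[z_prime], key=topics[z_prime].get, reverse=True)
    floor = min(topics[z][q] for q in in_topic)
    intruder = next(p for p in candidates if topics[z].get(p, 0.0) < floor)
    items = in_topic + [intruder]
    rng.shuffle(items)  # present the four n-grams in random order
    return items, intruder
```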
| { |
| "text": "Each subject was presented with trials constructed from the output of PDLDA and TNG for unigrams, bigrams, and trigrams. For unigrams, we also tested the output of the original smoothed LDA (Blei et al., 2003) . The experiment was conducted twice for a 2,246-document subset of the TREC AP corpus (Blei et al., 2003; Harman, 1992) : the first time proceeded as described above, but the second time did not allow word repetition within a topic's list. The topical phrases found by TNG and PDLDA often revolve around a central n-gram, with other words pre-or post-appended to it. In this intrusion experiment, any n-gram not containing the central word or phrase may be trivially identifiable, regardless of its relevance to the topic. For example, the intruder in Trial 4 of Figure 4 is easily identifiable even if a subject does not understand English. This second experiment was designed to test whether our conclusions hinge on word repetition.", |
| "cite_spans": [ |
| { |
| "start": 190, |
| "end": 209, |
| "text": "(Blei et al., 2003)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 297, |
| "end": 316, |
| "text": "(Blei et al., 2003;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 317, |
| "end": 330, |
| "text": "Harman, 1992)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 774, |
| "end": 782, |
| "text": "Figure 4", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Phrase Intrusion Experiment", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We used the MALLET toolbox (McCallum, 2002) for the implementations of LDA and TNG. Each model was run with 100 topics for 5,000 iterations. We set m = 2, \u03b1 = .01, \u03b2 = .01, \u03bb = 1, \u03c0 1 = \u03c0 2 = 1, \u03c1 1 = 10, and \u03c1 2 = .1. For all models, we treated certain punctuation as the start of a phrase by setting c j = 1 for all tokens j immediately following periods, commas, semicolons, and exclamation and question marks. To reduce runtime, we removed stopwords occuring in the MALLET tool-box's stopword list. Because TNG and LDA had trouble with single character words not in the stoplist, we manually removed them before the experiment. Any token immediately following a removed word was treated as if it were the start of a phrase.", |
| "cite_spans": [ |
| { |
| "start": 27, |
| "end": 43, |
| "text": "(McCallum, 2002)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Intrusion Experiment", |
| "sec_num": "5" |
| }, |
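The punctuation-based phrase-boundary preprocessing amounts to a single pass over the token stream. The sketch below is illustrative (not MALLET code): it sets c_j = 1 for the first token and for every token immediately following boundary punctuation.

```python
# Punctuation after which the next token starts a new phrase (c_j = 1),
# per the preprocessing described above.
BOUNDARY_PUNCT = frozenset({".", ",", ";", "!", "?"})

def phrase_starts(tokens, boundary_punct=BOUNDARY_PUNCT):
    """Return the changepoint indicator vector c over a token list."""
    c = [0] * len(tokens)
    if tokens:
        c[0] = 1  # a document's first token always starts a phrase
    for j in range(1, len(tokens)):
        if tokens[j - 1] in boundary_punct:
            c[j] = 1
    return c
```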
| { |
| "text": "As in Chang et al. (2009) , performance is measured via model precision, the fraction of subjects agreeing with the model. It is defined as MP m,n k = s 1(i m,n k,s = \u03c9 m,n k,s )/S where \u03c9 m,n k,s is the index of the intruding n-gram for subject s among the words generated from the kth topic of model m, i m,n k,s is the intruder selected by s, and S is the number of subjects. The model precisions are shown in Figure 5 . PDLDA achieves the highest precision in all conditions. Model precision is low in all models, which is a reflection of how challenging the task is on a small corpus laden with proper nouns and low-frequency words. Figure 5b demonstrates that the outcome of the experiment does not depend strongly on whether the topical n-gram lists have repeated words.", |
| "cite_spans": [ |
| { |
| "start": 6, |
| "end": 25, |
| "text": "Chang et al. (2009)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 413, |
| "end": 421, |
| "text": "Figure 5", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 638, |
| "end": 647, |
| "text": "Figure 5b", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Phrase Intrusion Experiment", |
| "sec_num": "5" |
| }, |
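The model-precision statistic reduces to a one-liner; the sketch below drops the per-model (m), per-length (n), and per-topic (k) indexing for brevity.

```python
def model_precision(selected, true_intruder):
    """MP = (1/S) * sum_s 1(i_s == omega): the fraction of the S subjects
    whose selected index matches the true intruder's index."""
    return sum(1 for i in selected if i == true_intruder) / len(selected)
```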
| { |
| "text": "We presented a topic model which simultaneously segments a corpus into phrases of varying lengths and assigns topics to them. The topical phrases found by PDLDA are much richer sources of information than the topical unigrams typically produced in topic modeling. As evidenced by the phrase-intrusion experiment, the topical n-gram lists that PDLDA finds are much more interpretable than those found by TNG.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The formalism of Bayesian changepoint detection arose naturally from the intuitive assumption that the topic of a sequence of tokens changes periodically, and that the tokens in between changepoints comprise a phrase. This formalism provides a principled way to discover phrases within the LDA framework. We presented a model embodying these principles and showed how to incorporate dependent Pitman-Yor processes into it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The first author is supported by an NSF Graduate Research Fellowship. The first and second authors began this project while working at J.D. Power & Associates. We are indebted to Michael Mozer, Matt Wilder, and Nicolas Nicolov for their advice.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Bayesian online changepoint detection", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [ |
| "Prescott" |
| ], |
| "last": "Adams", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "J", |
| "C" |
| ], |
| "last": "MacKay", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan Prescott Adams and David J.C. MacKay. 2007. Bayesian online changepoint detection. Technical re- port, University of Cambridge, Cambridge, UK.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Latent dirichlet allocation", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "M" |
| ], |
| "last": "Blei", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "I" |
| ], |
| "last": "Jordan", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Lafferty", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "3", |
| "issue": "", |
| "pages": "993--1022", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David M. Blei, Andrew Y. Ng, Michael I. Jordan, and John Lafferty. 2003. Latent dirichlet allocation. Jour- nal of Machine Learning Research, 3:993-1022.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Reading tea leaves: How humans interpret topic models", |
| "authors": [ |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jordan", |
| "middle": [], |
| "last": "Boyd-Graber", |
| "suffix": "" |
| }, |
| { |
| "first": "Sean", |
| "middle": [], |
| "last": "Gerrish", |
| "suffix": "" |
| }, |
| { |
| "first": "Chong", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "M" |
| ], |
| "last": "Blei", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Neural Information Processing Systems (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathan Chang, Jordan Boyd-Graber, Sean Gerrish, Chong Wang, and David M. Blei. 2009. Reading tea leaves: How humans interpret topic models. In Neural Information Processing Systems (NIPS).", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "An empirical study of smoothing techniques for language modeling", |
| "authors": [ |
| { |
| "first": "Stanley", |
| "middle": [ |
| "F" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Joshua", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stanley F. Chen and Joshua Goodman. 1998. An empiri- cal study of smoothing techniques for language model- ing. Technical Report TR-10-98, Center for Research in Computing Technology, Harvard University.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Finding scientific topics", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [ |
| "L" |
| ], |
| "last": "Griffiths", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Steyvers", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the National Academy of Sciences", |
| "volume": "101", |
| "issue": "", |
| "pages": "5228--5235", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. L. Griffiths and M. Steyvers. 2004. Finding scien- tific topics. Proceedings of the National Academy of Sciences, 101(Suppl. 1):5228-5235, April.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Integrating topics and syntax", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [ |
| "L" |
| ], |
| "last": "Griffiths", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steyvers", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "M" |
| ], |
| "last": "Blei", |
| "suffix": "" |
| }, |
| { |
| "first": "Joshua", |
| "middle": [ |
| "B" |
| ], |
| "last": "Tenenbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "17", |
| "issue": "", |
| "pages": "537--544", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas L. Griffiths, Mark Steyvers, David M. Blei, and Joshua B. Tenenbaum. 2005. Integrating topics and syntax. In Advances in Neural Information Processing Systems 17, pages 537-544. MIT Press.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Topics in semantic representation", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [ |
| "L" |
| ], |
| "last": "Griffiths", |
| "suffix": "" |
| }, |
| { |
| "first": "Joshua", |
| "middle": [ |
| "B" |
| ], |
| "last": "Tenenbaum", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steyvers", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Psychological Review", |
| "volume": "114", |
| "issue": "", |
| "pages": "211--244", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas L. Griffiths, Joshua B. Tenenbaum, and Mark Steyvers. 2007. Topics in semantic representation. Psychological Review, 114:211-244.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Hidden topic Markov models", |
| "authors": [ |
| { |
| "first": "Amit", |
| "middle": [], |
| "last": "Gruber", |
| "suffix": "" |
| }, |
| { |
| "first": "Yair", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "Michal", |
| "middle": [], |
| "last": "Rosen-Zvi", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Journal of Machine Learning Research -Proceedings Track", |
| "volume": "2", |
| "issue": "", |
| "pages": "163--170", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amit Gruber, Yair Weiss, and Michal Rosen-Zvi. 2007. Hidden topic Markov models. Journal of Machine Learning Research -Proceedings Track, 2:163-170.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Overview of the first text retrieval conference (trec-1)", |
| "authors": [ |
| { |
| "first": "Donna", |
| "middle": [], |
| "last": "Harman", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the first Text REtrieval Conference (TREC-1), Washington DC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Donna Harman. 1992. Overview of the first text re- trieval conference (trec-1). In Proceedings of the first Text REtrieval Conference (TREC-1), Washing- ton DC, USA.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "PCFGs, Topic Models, Adaptor Grammars and Learning Topical Collocations and the Structure of Proper Names", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Johnson. 2010. PCFGs, Topic Models, Adaptor Grammars and Learning Topical Collocations and the Structure of Proper Names. In Proceedings of the 48th", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Annual Meeting of the Association for Computational Linguistics", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1148--1157", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 1148-1157, Uppsala, Sweden, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A solution to plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [ |
| "K" |
| ], |
| "last": "Landauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Susan", |
| "middle": [ |
| "T" |
| ], |
| "last": "Dumais", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Psychological Review", |
| "volume": "104", |
| "issue": "2", |
| "pages": "211--240", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas K. Landauer and Susan T. Dumais. 1997. A so- lution to plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211 -240.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Mallet: A machine learning for language toolkit", |
| "authors": [ |
| { |
| "first": "Andrew Kachites", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Kachites McCallum. 2002. Mal- let: A machine learning for language toolkit. http://mallet.cs.umass.edu.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Slice sampling. Annals of Statistics", |
| "authors": [ |
| { |
| "first": "Radford", |
| "middle": [], |
| "last": "Neal", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "", |
| "volume": "31", |
| "issue": "", |
| "pages": "705--767", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Radford Neal. 2000. Slice sampling. Annals of Statis- tics, 31:705-767.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Running experiments on Amazon Mechanical Turk", |
| "authors": [ |
| { |
| "first": "Gabriele", |
| "middle": [], |
| "last": "Paolacci", |
| "suffix": "" |
| }, |
| { |
| "first": "Jesse", |
| "middle": [], |
| "last": "Chandler", |
| "suffix": "" |
| }, |
| { |
| "first": "Panagiotis", |
| "middle": [ |
| "G" |
| ], |
| "last": "Ipeirotis", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Judgment and Decision Making", |
| "volume": "5", |
| "issue": "5", |
| "pages": "411--419", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gabriele Paolacci, Jesse Chandler, and Panagiotis G. Ipeirotis. 2010. Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5):411-419.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Pitman", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Yor", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Annals of Probability", |
| "volume": "25", |
| "issue": "", |
| "pages": "855--900", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Pitman and M. Yor. 1997. The two-parameter Poisson- Dirichlet distribution derived from a stable subordina- tor. Annals of Probability, 25:855-900.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Combinatorial stochastic processes", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Pitman", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Pitman. 2002. Combinatorial stochastic processes. Technical Report 621, Department of Statistics, Uni- versity of California at Berkeley.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Is knowledge-free induction of multiword unit dictionary headwords a solved problem", |
| "authors": [ |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Schone", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "100--108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patrick Schone and Daniel Jurafsky. 2001. Is knowledge-free induction of multiword unit dictionary headwords a solved problem? In Lillian Lee and Donna Harman, editors, Proceedings of the 2001 Con- ference on Empirical Methods in Natural Language Processing, pages 100-108.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "A hierarchical Bayesian language model based on Pitman-Yor processes", |
| "authors": [ |
| { |
| "first": "Yee Whye", |
| "middle": [], |
| "last": "Teh", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, ACL-44", |
| "volume": "", |
| "issue": "", |
| "pages": "985--992", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yee Whye Teh. 2006. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceed- ings of the 21st International Conference on Compu- tational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, ACL- 44, pages 985-992, Morristown, NJ, USA. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Topic modeling: beyond bagof-words", |
| "authors": [ |
| { |
| "first": "Hanna", |
| "middle": [ |
| "M" |
| ], |
| "last": "Wallach", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 23rd International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "977--984", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hanna M. Wallach. 2006. Topic modeling: beyond bag- of-words. In Proceedings of the 23rd International Conference on Machine Learning, pages 977-984.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Topical n-grams: Phrase and topic discovery, with an application to information retrieval", |
| "authors": [ |
| { |
| "first": "Xuerui", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "Xing", |
| "middle": [], |
| "last": "Wei", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 7th IEEE International Conference on Data Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xuerui Wang, Andrew McCallum, and Xing Wei. 2007. Topical n-grams: Phrase and topic discovery, with an application to information retrieval. In Proceedings of the 7th IEEE International Conference on Data Min- ing.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "Experimental setup of the phrase intrusion experiment in which subjects must click on the ngram that does not belong.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "num": null, |
| "text": "An across-subject measure of the ability to detect intruders as a function of n-gram size and model. Excluding trials with repeated words does not qualitatively affect the results.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "text": "Six out of one hundred topics found by our model, PDLDA, on the Touchstone Applied Science Associates (TASA) corpus(Landauer and Dumais, 1997). Each column within a box shows the top fifteen phrases for a topic and is restricted to phrases of a minimum length of one, two, or three words, respectively. The rows are ordered by likelihood.", |
| "num": null, |
| "content": "<table><tr><td>matter</td><td>chemical reactions</td><td>like charges repel</td><td/><td/></tr><tr><td>atoms</td><td>atomic number</td><td>positively charged nucleus</td><td/><td/></tr><tr><td>elements</td><td>hydrogen atoms</td><td>unlike charges attract</td><td/><td/></tr><tr><td>electrons</td><td>hydrogen atom</td><td>outer energy level</td><td/><td/></tr><tr><td>atom</td><td>periodic table</td><td>reaction takes place</td><td/><td/></tr><tr><td>molecules</td><td>chemical change</td><td>negatively charged electrons</td><td/><td/></tr><tr><td>form</td><td>physical properties</td><td>chemical change takes place</td><td/><td/></tr><tr><td>oxygen</td><td>chemical reaction</td><td>form new substances</td><td/><td/></tr><tr><td>hydrogen</td><td>water molecules</td><td>physical change takes place</td><td/><td/></tr><tr><td>particles</td><td>sodium chloride</td><td>form sodium chloride</td><td/><td/></tr><tr><td>element</td><td>small amounts</td><td>modern atomic theory</td><td/><td/></tr><tr><td>solution</td><td>positive charge</td><td>electrically charged particles</td><td/><td/></tr><tr><td>substance</td><td>carbon atoms</td><td>increasing atomic number</td><td/><td/></tr><tr><td>reaction</td><td>physical change</td><td>second ionization energies</td><td/><td/></tr><tr><td>nucleus</td><td>chemical properties</td><td>higher energy levels</td><td/><td/></tr><tr><td/><td colspan=\"2\">(a) Topic 1</td><td colspan=\"2\">(b) Topic 2</td></tr><tr><td/><td colspan=\"2\">(c) Topic 3</td><td colspan=\"2\">(d) Topic 4</td></tr><tr><td/><td colspan=\"2\">(e) Topic 5</td><td colspan=\"2\">(f) Topic 6</td></tr><tr><td>Figure 1:</td><td/><td/><td/><td/></tr><tr><td>words word sentence write writing paragraph sentences meaning use subject language read example verb topic water air temperature heat liquid gas gases hot pressure atmosphere warm cold surface oxygen clouds</td><td>main idea topic sentence english language following paragraph words like quotation marks 
direct object word processing sentence tells figurative language writing process following sentences subject matter standard english use words water vapor air pollution air pressure warm air cold water earth's surface room temperature boiling point drinking water atmospheric pressure cold war high temperatures liquid water cold air warm water</td><td>word processing center word processing systems word processing equipment speak different languages use quotation marks single main idea use words like topic sentence states present perfect tense express complete thoughts word processing software use formal english standard american english collective noun refers formal standard english water vapor condenses warm air rises cold air mass called water vapor water vapor changes process takes place warm air mass clean air act gas called water vapor dry spell holds air pressure inside sewage treatment plant air pollution laws high melting points high melting point</td><td>president congress vote party constitution state members office government states elected representatives senate house washington energy used oil heat coal use fuel produce power source light electricity burn gas gasoline natural resources supreme court new york democratic party vice president political parties national government executive branch civil rights new government political party andrew jackson chief justice federal government state legislatures public opinion natural gas heat energy iron ore carbon dioxide potential energy solar energy light energy fossil fuels hot water steam engine large amounts sun's energy radiant energy nuclear energy china africa india europe people chinese asia egypt world rome land east trade countries empire middle east western europe north africa mediterranean sea years ago roman empire far east southeast asia west africa saudi arabia capital letter asia minor united states capital city centuries ago</td><td>civil rights act civil rights movement supreme court ruled 
president theodore roosevelt second continental congress equal rights amendment strong central government sherman antitrust act civil rights legislation public opinion polls major political parties congress shall make federal district court supreme court decisions american foreign policy nuclear power plants nuclear power plant important natural resources electric power plants called fossil fuels important natural resource produce large amounts called solar energy electric light bulb use electrical energy use solar energy carbon dioxide gas called potential energy gas called carbon dioxide called crude oil 2000 years ago east india company eastern united states 4000 years ago southwestern united states middle atlantic states northeastern united states western united states southeastern united states 200 years ago middle atlantic region indus river valley western roman empire british north america act coast guard station</td></tr></table>", |
| "type_str": "table", |
| "html": null |
| } |
| } |
| } |
| } |