{
"paper_id": "P14-1034",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:04:31.677177Z"
},
"title": "Anchors Regularized: Adding Robustness and Extensibility to Scalable Topic-Modeling Algorithms",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yuening",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {},
"email": "ynhu@cs.umd.edu"
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Spectral methods offer scalable alternatives to Markov chain Monte Carlo and expectation maximization. However, these new methods lack the rich priors associated with probabilistic models. We examine Arora et al.'s anchor words algorithm for topic modeling and develop new, regularized algorithms that not only mathematically resemble Gaussian and Dirichlet priors but also improve the interpretability of topic models. Our new regularization approaches make these efficient algorithms more flexible; we also show that these methods can be combined with informed priors.",
"pdf_parse": {
"paper_id": "P14-1034",
"_pdf_hash": "",
"abstract": [
{
"text": "Spectral methods offer scalable alternatives to Markov chain Monte Carlo and expectation maximization. However, these new methods lack the rich priors associated with probabilistic models. We examine Arora et al.'s anchor words algorithm for topic modeling and develop new, regularized algorithms that not only mathematically resemble Gaussian and Dirichlet priors but also improve the interpretability of topic models. Our new regularization approaches make these efficient algorithms more flexible; we also show that these methods can be combined with informed priors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Topic models are of practical and theoretical interest. Practically, they have been used to understand political perspective (Paul and Girju, 2010) , improve machine translation (Eidelman et al., 2012) , reveal literary trends (Jockers, 2013) , and understand scientific discourse (Hall et al., 2008) . Theoretically, their latent variable formulation has served as a foundation for more robust models of other linguistic phenomena (Brody and Lapata, 2009) .",
"cite_spans": [
{
"start": 125,
"end": 147,
"text": "(Paul and Girju, 2010)",
"ref_id": "BIBREF27"
},
{
"start": 178,
"end": 201,
"text": "(Eidelman et al., 2012)",
"ref_id": "BIBREF13"
},
{
"start": 227,
"end": 242,
"text": "(Jockers, 2013)",
"ref_id": "BIBREF19"
},
{
"start": 281,
"end": 300,
"text": "(Hall et al., 2008)",
"ref_id": "BIBREF17"
},
{
"start": 432,
"end": 456,
"text": "(Brody and Lapata, 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Modern topic models are formulated as a latent variable model. Like hidden Markov models (Rabiner, 1989, HMM) , each token comes from one of K unknown distributions. Unlike a HMM, topic models assume that each document is an admixture of these hidden components called topics. Posterior inference discovers the hidden variables that best explain a dataset. Typical solutions use MCMC (Griffiths and Steyvers, 2004) or variational EM (Blei et al., 2003) , which can be viewed as local optimization: searching for the latent variables that maximize the data likelihood.",
"cite_spans": [
{
"start": 89,
"end": 109,
"text": "(Rabiner, 1989, HMM)",
"ref_id": null
},
{
"start": 384,
"end": 414,
"text": "(Griffiths and Steyvers, 2004)",
"ref_id": "BIBREF16"
},
{
"start": 433,
"end": 452,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An exciting vein of new research provides provable polynomial-time alternatives. These ap-proaches provide solutions to hidden Markov models (Anandkumar et al., 2012) , mixture models (Kannan et al., 2005) , and latent variable grammars (Cohen et al., 2013) . The key insight is not to directly optimize observation likelihood but to instead discover latent variables that can reconstruct statistics of the assumed generative model. Unlike search-based methods, which can be caught in local minima, these techniques are often guaranteed to find global optima.",
"cite_spans": [
{
"start": 141,
"end": 166,
"text": "(Anandkumar et al., 2012)",
"ref_id": "BIBREF0"
},
{
"start": 184,
"end": 205,
"text": "(Kannan et al., 2005)",
"ref_id": "BIBREF20"
},
{
"start": 237,
"end": 257,
"text": "(Cohen et al., 2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These general techniques can be improved by making reasonable assumptions about the models. For example, Arora et al. (2012b) 's approach for inference in topic models assume that each topic has a unique \"anchor\" word (thus, we call this approach anchor). This approach is fast and effective; because it only uses word co-occurrence information, it can scale to much larger datasets than MCMC or EM alternatives. We review the anchor method in Section 2.",
"cite_spans": [
{
"start": 105,
"end": 125,
"text": "Arora et al. (2012b)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite their advantages, these techniques are not a panacea. They do not accommodate the rich priors that modelers have come to expect. Priors can improve performance (Wallach et al., 2009) , provide domain adaptation (Daum\u00e9 III, 2007; Finkel and Manning, 2009) , and guide models to reflect users' needs (Hu et al., 2013) . In Section 3, we regularize the anchor method to trade-off the reconstruction fidelity with the penalty terms that mimic Gaussian and Dirichlet priors.",
"cite_spans": [
{
"start": 168,
"end": 190,
"text": "(Wallach et al., 2009)",
"ref_id": "BIBREF36"
},
{
"start": 219,
"end": 236,
"text": "(Daum\u00e9 III, 2007;",
"ref_id": "BIBREF10"
},
{
"start": 237,
"end": 262,
"text": "Finkel and Manning, 2009)",
"ref_id": "BIBREF14"
},
{
"start": 306,
"end": 323,
"text": "(Hu et al., 2013)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Another shortcoming is that these models have not been scrutinized using standard NLP evaluations. Because these approaches emerged from the theory community, anchor's evaluations, when present, typically use training reconstruction. In Section 4, we show that our regularized models can generalize to previously unseen data-as measured by held-out likelihood (Blei et al., 2003) -and are more interpretable (Chang et al., 2009; Newman et al., 2010) . We also show that our extension to the anchor method enables new applications: for K number of topics V vocabulary size M document frequency: minimum documents an anchor word candidate must appear in Q word co-occurrence matrix",
"cite_spans": [
{
"start": 360,
"end": 379,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF5"
},
{
"start": 408,
"end": 428,
"text": "(Chang et al., 2009;",
"ref_id": "BIBREF7"
},
{
"start": 429,
"end": 449,
"text": "Newman et al., 2010)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Qi,j = p(w1 = i, w2 = j) Q conditional distribution of Q Qi,j = p(w1 = j | w2 = i) Qi,\u2022 row i ofQ A topic matrix, of size V \u00d7 K A j,k = p(w = j | z = k) C anchor coefficient of size K \u00d7 V C j,k = p(z = k | w = j) S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "set of anchor word indexes {s1, . . . sK } \u03bb regularization weight Table 1 : Notation used. Matrices are in bold (Q, C), sets are in script S example, using an informed priors to discover concepts of interest.",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Having shown that regularization does improve performance, in Section 5 we explore why. We discuss the trade-off of training data reconstruction with sparsity and why regularized topics are more interpretable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we briefly review the anchor method and place it in the context of topic model inference. Once we have established the anchor objective function, in the next section we regularize the objective function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Anchor Words: Scalable Topic Models",
"sec_num": "2"
},
{
"text": "Rethinking Data: Word Co-occurrence Inference in topic models can be viewed as a black box: given a set of documents, discover the topics that best explain the data. The difference between anchor and conventional inference is that while conventional methods take a collection of documents as input, anchor takes word co-occurrence statistics. Given a vocabulary of size V , we represent this joint distribution as Q i,j = p(w 1 = i, w 2 = j), each cell represents the probability of words appearing together in a document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Anchor Words: Scalable Topic Models",
"sec_num": "2"
},
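As a concrete illustration of this input, the joint matrix Q can be estimated from documents by counting within-document word pairs. The sketch below is ours: the function names and the simple pair-counting scheme are illustrative assumptions, not the paper's exact estimator.

```python
from itertools import combinations

def cooccurrence_matrix(docs, vocab):
    # Q[i][j] approximates p(w1 = i, w2 = j): normalized counts of
    # unordered word pairs appearing in the same document.
    index = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    Q = [[0.0] * V for _ in range(V)]
    total = 0
    for doc in docs:
        ids = [index[w] for w in doc if w in index]
        for a, b in combinations(ids, 2):
            Q[a][b] += 1
            Q[b][a] += 1
            total += 2
    if total:
        Q = [[count / total for count in row] for row in Q]
    return Q
```

The resulting Q is symmetric and sums to one, matching the joint-distribution reading above.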
{
"text": "Like other topic modeling algorithms, the output of the anchor method is the topic word distributions A with size V * K, where K is the total number of topics desired, a parameter of the algorithm. The k th column of A will be the topic distribution over all words for topic k, and A w,k is the probability of observing type w given topic k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Anchor Words: Scalable Topic Models",
"sec_num": "2"
},
{
"text": "Anchors: Topic Representatives The anchor method (Arora et al., 2012a) is based on the separability assumption (Donoho and Stodden, 2003) , which assumes that each topic contains at least one namesake \"anchor word\" that has non-zero probability only in that topic. Intuitively, this means that each topic has unique, specific word that, when used, identifies that topic. For example, while \"run\", \"base\", \"fly\", and \"shortstop\" are associated with a topic about baseball, only \"shortstop\" is unambiguous, so it could serve as this topic's anchor word.",
"cite_spans": [
{
"start": 49,
"end": 70,
"text": "(Arora et al., 2012a)",
"ref_id": "BIBREF2"
},
{
"start": 111,
"end": 137,
"text": "(Donoho and Stodden, 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Anchor Words: Scalable Topic Models",
"sec_num": "2"
},
{
"text": "Let's assume that we knew what the anchor words were: a set S that indexes rows in Q. Now consider the conditional distribution of word i, the probability of the rest of the vocabulary given an observation of word i; we represent this asQ i,\u2022 , as we can construct this by normalizing the rows of Q. For an anchor word s a \u2208 S, this will look like a topic;Q \"shortstop\",\u2022 will have high probability for words associated with baseball.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Anchor Words: Scalable Topic Models",
"sec_num": "2"
},
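Constructing the conditional rows Q̄_{i,•} is simply row normalization of Q; a minimal sketch (our naming, not the authors' code):

```python
def conditional_rows(Q):
    # Normalize each row of the joint matrix Q so that row i becomes
    # the conditional distribution p(w2 = . | w1 = i).
    Q_bar = []
    for row in Q:
        total = sum(row)
        Q_bar.append([c / total for c in row] if total > 0 else list(row))
    return Q_bar
```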
{
"text": "The key insight of the anchor algorithm is that the conditional distribution of polysemous nonanchor words can be reconstructed as a linear combination of the conditional distributions of anchor words. For example,Q \"fly\",\u2022 could be reconstructed by combining the anchor words \"insecta\", \"boeing\", and \"shortshop\". We represent the coefficients of this reconstruction as a matrix C, where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Anchor Words: Scalable Topic Models",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C i,k = p(z = k | w = i). Thus, for any word i, Q i,\u2022 \u2248 s k \u2208S C i,kQs k ,\u2022 .",
"eq_num": "(1)"
}
],
"section": "Anchor Words: Scalable Topic Models",
"sec_num": "2"
},
{
"text": "The coefficient matrix is not the usual output of a topic modeling algorithm. The usual output is the probability of a word given a topic. The coefficient matrix C is the probability of a topic given a word. We use Bayes rule to recover the topic distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Anchor Words: Scalable Topic Models",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(w = i|z = k) \u2261 A i,k \u221d p(z = k|w = i)p(w = i) = C i,k jQ i,j",
"eq_num": "(2)"
}
],
"section": "Anchor Words: Scalable Topic Models",
"sec_num": "2"
},
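Equation 2 can be read as a small matrix computation: scale each row of C by p(w = i) = Σ_j Q_{i,j}, then normalize each topic column of the result. An illustrative sketch (names are ours):

```python
def topics_from_coefficients(C, Q):
    # Recover A (word given topic) from C (topic given word) via
    # Bayes' rule: A[i][k] is proportional to C[i][k] * p(w = i).
    V, K = len(C), len(C[0])
    p_w = [sum(Q[i]) for i in range(V)]  # p(w = i) = sum_j Q[i][j]
    A = [[C[i][k] * p_w[i] for k in range(K)] for i in range(V)]
    for k in range(K):  # normalize each topic column over the vocabulary
        col = sum(A[i][k] for i in range(V))
        if col > 0:
            for i in range(V):
                A[i][k] /= col
    return A
```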
{
"text": "where p(w) is the normalizer of Q to obtainQ w,\u2022 . The geometric argument for finding the anchor words is one of the key contributions of Arora et al. (2012a) and is beyond the scope of this paper. The algorithms in Section 3 use the anchor selection subroutine unchanged. The difference in our approach is in how we discover the anchor coefficients C.",
"cite_spans": [
{
"start": 138,
"end": 158,
"text": "Arora et al. (2012a)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Anchor Words: Scalable Topic Models",
"sec_num": "2"
},
{
"text": "From Anchors to Topics After we have the anchor words, we need to find the coefficients that best reconstruct the dataQ (Equation 1). Arora et al. (2012a) chose the C that minimizes the KL divergence betweenQ i,\u2022 and the reconstruction based on the anchor word's conditional word vec-",
"cite_spans": [
{
"start": 134,
"end": 154,
"text": "Arora et al. (2012a)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Anchor Words: Scalable Topic Models",
"sec_num": "2"
},
{
"text": "tors s k \u2208S C i,kQs k ,\u2022 , C i,\u2022 = argmin C i,\u2022 D KL \uf8eb \uf8edQ i,\u2022 || s k \u2208S C i,kQs k ,\u2022 \uf8f6 \uf8f8 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Anchor Words: Scalable Topic Models",
"sec_num": "2"
},
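This per-word objective is a simplex-constrained KL minimization. One standard way to solve it is exponentiated gradient, which keeps the coefficients non-negative and summing to one at every step. The implementation below is an illustrative sketch under our own naming; the step size and iteration count are arbitrary choices, not values from the paper.

```python
import math

def reconstruct(c, anchor_rows):
    # Mix the anchor words' conditional rows with coefficients c.
    return [sum(ck * row[j] for ck, row in zip(c, anchor_rows))
            for j in range(len(anchor_rows[0]))]

def fit_row(q_row, anchor_rows, steps=200, lr=1.0):
    # Exponentiated-gradient descent on D_KL(q_row || sum_k c_k * anchor_k);
    # multiplicative updates plus renormalization keep c on the simplex.
    K = len(anchor_rows)
    c = [1.0 / K] * K
    for _ in range(steps):
        recon = reconstruct(c, anchor_rows)
        # dKL/dc_k = -sum_j q_j * anchor_k[j] / recon[j]
        grad = [-sum(qj * row[j] / (recon[j] + 1e-12)
                     for j, qj in enumerate(q_row))
                for row in anchor_rows]
        c = [ck * math.exp(-lr * g) for ck, g in zip(c, grad)]
        norm = sum(c)
        c = [ck / norm for ck in c]
    return c
```

On a toy problem where the target row is an exact mixture of the anchor rows, the recovered coefficients approach the true mixing weights.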
{
"text": "(3) The anchor method is fast, as it only depends on the size of the vocabulary once the cooccurrence statistics Q are obtained. However, it does not support rich priors for topic models, while MCMC (Griffiths and Steyvers, 2004) and variational EM (Blei et al., 2003) methods can. This prevents models from using priors to guide the models to discover particular themes (Zhai et al., 2012) , or to encourage sparsity in the models (Yao et al., 2009) . In the rest of this paper, we correct this lacuna by adding regularization inspired by Bayesian priors to the anchor algorithm.",
"cite_spans": [
{
"start": 199,
"end": 229,
"text": "(Griffiths and Steyvers, 2004)",
"ref_id": "BIBREF16"
},
{
"start": 249,
"end": 268,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF5"
},
{
"start": 371,
"end": 390,
"text": "(Zhai et al., 2012)",
"ref_id": "BIBREF38"
},
{
"start": 432,
"end": 450,
"text": "(Yao et al., 2009)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Anchor Words: Scalable Topic Models",
"sec_num": "2"
},
{
"text": "In this section, we add regularizers to the anchor objective (Equation 3). In this section, we briefly review regularizers and then add two regularizers, inspired by Gaussian (L 2 , Section 3.1) and Dirichlet priors (Beta, Section 3.2), to the anchor objective function (Equation 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Regularization",
"sec_num": "3"
},
{
"text": "Regularization terms are ubiquitous. They typically appear as an additional term in an optimization problem. Instead of optimizing a function just of the data x and parameters \u03b2, f (x, \u03b2), one optimizes an objective function that includes a regularizer that is only a function of parameters: f (w, \u03b2) + r(\u03b2). Regularizers are critical in staid methods like linear regression (Ng, 2004) , in workhorse methods such as maximum entropy modeling (Dud\u00edk et al., 2004) , and also in emerging fields such as deep learning (Wager et al., 2013) .",
"cite_spans": [
{
"start": 375,
"end": 385,
"text": "(Ng, 2004)",
"ref_id": "BIBREF26"
},
{
"start": 442,
"end": 462,
"text": "(Dud\u00edk et al., 2004)",
"ref_id": "BIBREF12"
},
{
"start": 515,
"end": 535,
"text": "(Wager et al., 2013)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Regularization",
"sec_num": "3"
},
{
"text": "In addition to being useful, regularization terms are appealing theoretically because they often correspond to probabilistic interpretations of parameters. For example, if we are seeking the MLE of a probabilistic model parameterized by \u03b2, p(x|\u03b2), adding a regularization term r(\u03b2) = L i=1 \u03b2 2 i corresponds to adding a Gaussian prior and maximizing log probability of the posterior (ignoring constant terms) (Rennie, 2003).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Regularization",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (\u03b2 i ) = 1 \u221a 2\u03c0\u03c3 2 exp \u2212 \u03b2 2 i 2\u03c3 2",
"eq_num": "(4)"
}
],
"section": "Adding Regularization",
"sec_num": "3"
},
{
"text": "The simplest form of regularization we can add is L 2 regularization. This is similar to assuming that probability of a word given a topic comes from a Gaussian distribution. While the distribution over topics is typically Dirichlet, Dirichlet distributions have been replaced by logistic normals in topic modeling applications (Blei and Lafferty, 2005) and for probabilistic grammars of language (Cohen and Smith, 2009) .",
"cite_spans": [
{
"start": 328,
"end": 353,
"text": "(Blei and Lafferty, 2005)",
"ref_id": "BIBREF4"
},
{
"start": 397,
"end": 420,
"text": "(Cohen and Smith, 2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "L 2 Regularization",
"sec_num": "3.1"
},
{
"text": "Augmenting the anchor objective with an L 2 penalty yields",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "L 2 Regularization",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C i,\u2022 =argmin C i,\u2022 D KL \uf8eb \uf8edQ i,\u2022 || s k \u2208S C i,kQs k ,\u2022 \uf8f6 \uf8f8 + \u03bb C i,\u2022 \u2212 \u00b5 i,\u2022 2 2 ,",
"eq_num": "(5)"
}
],
"section": "L 2 Regularization",
"sec_num": "3.1"
},
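The L 2 -regularized objective of Equation 5 can be written directly as a function of the coefficients. A self-contained sketch (our naming; the epsilon smoothing is a numerical convenience we add):

```python
import math

def l2_objective(c, q_row, anchor_rows, lam, mu):
    # Equation 5: KL reconstruction error plus lam * ||c - mu||_2^2.
    eps = 1e-12
    recon = [sum(ck * row[j] for ck, row in zip(c, anchor_rows))
             for j in range(len(q_row))]
    div = sum(qj * math.log((qj + eps) / (rj + eps))
              for qj, rj in zip(q_row, recon))
    return div + lam * sum((ck - mk) ** 2 for ck, mk in zip(c, mu))
```

With a perfect reconstruction and a zero mean vector, the objective reduces to the penalty lam * ||c||², making the trade-off explicit.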
{
"text": "where regularization weight \u03bb balances the importance of a high-fidelity reconstruction against the regularization, which encourages the anchor coefficients to be close to the vector \u00b5. When the mean vector \u00b5 is zero, this encourages the topic coefficients to be zero. In Section 4.3, we use a non-zero mean \u00b5 to encode an informed prior to encourage topics to discover specific concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "L 2 Regularization",
"sec_num": "3.1"
},
{
"text": "The more common prior for topic models is a Dirichlet prior (Minka, 2000) . However, we cannot apply this directly because the optimization is done on a row-by-row basis of the anchor coefficient matrix C, optimizing C for a fixed word w for and all topics. If we want to model the probability of a word, it must be the probability of word w in a topic versus all other words. Modeling this dichotomy (one versus all others in a topic) is possible. The constructive definition of the Dirichlet distribution (Sethuraman, 1994) states that if one has a V -dimensional multinomial \u03b8 \u223c Dir(\u03b1 1 . . . \u03b1 V ), then the marginal distribution of \u03b8 w follows \u03b8 w \u223c Beta(\u03b1 w , i =w \u03b1 i ). This is the tool we need to consider the distribution of a single word's probability.",
"cite_spans": [
{
"start": 60,
"end": 73,
"text": "(Minka, 2000)",
"ref_id": "BIBREF24"
},
{
"start": 507,
"end": 525,
"text": "(Sethuraman, 1994)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Beta Regularization",
"sec_num": "3.2"
},
{
"text": "This requires including the topic matrix as part of the objective function. The topic matrix is a linear transformation of the coefficient matrix (Equation 2). The objective for beta regularization becomes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beta Regularization",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C i,\u2022 =argmin C i,\u2022 D KL \uf8eb \uf8edQ i,\u2022 || s k \u2208S C i,kQs k ,\u2022 \uf8f6 \uf8f8 \u2212 \u03bb s k \u2208S log (Beta(A i,k ; a, b)),",
"eq_num": "(6)"
}
],
"section": "Beta Regularization",
"sec_num": "3.2"
},
{
"text": "where \u03bb again balances reconstruction against the regularization. To ensure the tractability of this algorithm, we enforce a convex regularization function, which requires that a > 1 and b > 1. If we enforce a uniform prior-E Beta(a,b) [A i,k ] = 1 Vand that the mode of the distribution is also 1 V , 1 this gives us the following parametric form for a and b:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beta Regularization",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a = x V + 1, and b = (V \u2212 1)x V + 1",
"eq_num": "(7)"
}
],
"section": "Beta Regularization",
"sec_num": "3.2"
},
{
"text": "for real x greater than zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beta Regularization",
"sec_num": "3.2"
},
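Equation 7 and the resulting Beta log-density are easy to compute directly; a sketch (our naming), using lgamma for the normalizer:

```python
import math

def beta_hyperparams(x, V):
    # Equation 7: for x > 0 both a and b exceed 1 (keeping the penalty
    # convex), and the Beta(a, b) mode (a-1)/(a+b-2) sits at 1/V.
    a = x / V + 1
    b = (V - 1) * x / V + 1
    return a, b

def log_beta_pdf(p, a, b):
    # log Beta(p; a, b), with the normalizer computed via lgamma.
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return log_norm + (a - 1) * math.log(p) + (b - 1) * math.log(1 - p)
```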
{
"text": "Equation 5 and Equation 6 are optimized using L-BFGS gradient optimization (Galassi et al., 2003) . We initialize C randomly from Dir(\u03b1) with \u03b1 = 60 V (Wallach et al., 2009) . We update C after optimizing all V rows. The newly updated C replaces the old topic coefficients. We track how much the topic coefficients C change between two consecutive iterations i and i + 1 and represent it as \u2206C \u2261 C i+1 \u2212 C i 2 . We stop optimization when \u2206C \u2264 \u03b4. When \u03b4 = 0.1, the L 2 and unregularized anchor algorithm converges after a single iteration, while beta regularization typically converges after fewer than ten iterations (Figure 4 ).",
"cite_spans": [
{
"start": 75,
"end": 97,
"text": "(Galassi et al., 2003)",
"ref_id": "BIBREF15"
},
{
"start": 151,
"end": 173,
"text": "(Wallach et al., 2009)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 617,
"end": 626,
"text": "(Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Initialization and Convergence",
"sec_num": "3.3"
},
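The stopping rule above can be sketched as a small driver loop; here `update_fn` stands in for the per-row L-BFGS optimization, which we do not reproduce:

```python
import math

def frobenius_delta(C_new, C_old):
    # Delta-C: Frobenius norm of the difference between iterates.
    return math.sqrt(sum((n - o) ** 2
                         for rn, ro in zip(C_new, C_old)
                         for n, o in zip(rn, ro)))

def run_until_converged(update_fn, C, delta=0.1, max_iters=50):
    # Re-optimize all rows of C until consecutive iterates differ by
    # at most delta, as in the stopping rule described above.
    for it in range(1, max_iters + 1):
        C_new = update_fn(C)
        if frobenius_delta(C_new, C) <= delta:
            return C_new, it
        C = C_new
    return C, max_iters
```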
{
"text": "In this section, we measure the performance of our proposed regularized anchor word algorithms. We will refer to specific algorithms in bold. For example, the original anchor algorithm is anchor. Our L 2 regularized variant is anchor-L 2 , and our beta regularized variant is anchor-beta. To provide conventional baselines, we also compare our methods against topic models from variational inference (Blei et al., 2003, variational) and MCMC (Griffiths and Steyvers, 2004; McCallum, 2002, MCMC) .",
"cite_spans": [
{
"start": 400,
"end": 432,
"text": "(Blei et al., 2003, variational)",
"ref_id": null
},
{
"start": 442,
"end": 472,
"text": "(Griffiths and Steyvers, 2004;",
"ref_id": "BIBREF16"
},
{
"start": 473,
"end": 494,
"text": "McCallum, 2002, MCMC)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization Improves Topic Models",
"sec_num": "4"
},
{
"text": "We apply these inference strategies on three diverse corpora: scientific articles from the Neural Information Processing Society (NIPS), 2 Internet newsgroups postings (20NEWS), 3 and New York Times editorials (Sandhaus, 2008, NYT) . Statistics for the datasets are summarized in Table 2 . We split each dataset into a training fold (70%), development fold (15%), and a test fold (15%): the training data are used to fit models; the development set are used to select parameters (anchor threshold M , document prior \u03b1, regularization weight \u03bb); and final results are reported on the test fold.",
"cite_spans": [
{
"start": 178,
"end": 179,
"text": "3",
"ref_id": null
},
{
"start": 210,
"end": 231,
"text": "(Sandhaus, 2008, NYT)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 280,
"end": 287,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Regularization Improves Topic Models",
"sec_num": "4"
},
{
"text": "We use two evaluation measures, held-out likelihood (Blei et al., 2003, HL) and topic interpretability (Chang et al., 2009; Newman et al., 2010, TI) . Held-out likelihood measures how well the model can reconstruct held-out documents that the model has never seen before. This is the typical evaluation for probabilistic models. Topic interpretability is a more recent metric to capture how useful the topics can be to human users attempting to make sense of a large datasets.",
"cite_spans": [
{
"start": 52,
"end": 75,
"text": "(Blei et al., 2003, HL)",
"ref_id": null
},
{
"start": 103,
"end": 123,
"text": "(Chang et al., 2009;",
"ref_id": "BIBREF7"
},
{
"start": 124,
"end": 148,
"text": "Newman et al., 2010, TI)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization Improves Topic Models",
"sec_num": "4"
},
{
"text": "Held-out likelihood cannot be computed with existing anchor algorithms, so we use the topic distributions learned from anchor as input to a reference variational inference implementation (Blei et al., 2003) to compute HL. This requires an additional parameter, the Dirichlet prior \u03b1 for the per-document distribution over topics. We select \u03b1 using grid search on the development set.",
"cite_spans": [
{
"start": 187,
"end": 206,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization Improves Topic Models",
"sec_num": "4"
},
{
"text": "To compute TI and evaluate topic coherence, we use normalized pairwise mutual information (NPMI) (Lau et al., 2014) over topics' twenty most probable words. Topic coherence is computed against the NPMI of a reference corpus. For coherence evaluations, we use both intrinsic and extrinsic text collections to compute NPMI. Intrinsic coherence (TI-i) is computed on training and development data at development time and on training and test data at test time. Extrinsic coherence (TI-e) is computed from English Wikipedia articles, with disjoint halves (1.1 million pages each) for distinct development and testing TI-e evaluation. Figure 2 : Selection of \u03bb based on HL and TI scores on the development set. The value of \u03bb = 0 is equivalent to the original anchor algorithm; regularized versions find better solutions as the regularization weight \u03bb becomes non-zero.",
"cite_spans": [],
"ref_spans": [
{
"start": 630,
"end": 638,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Regularization Improves Topic Models",
"sec_num": "4"
},
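NPMI-based coherence over a topic's top words can be computed from document-level co-occurrence in a reference corpus. The sketch below is ours (Lau et al. (2014) describe the metric, not this code); the -1 fallback for never-co-occurring pairs is a common convention we assume:

```python
import math
from itertools import combinations

def npmi_coherence(top_words, ref_docs, eps=1e-12):
    # Average NPMI over all pairs of a topic's top words, using
    # document-level co-occurrence in a reference corpus.
    doc_sets = [set(d) for d in ref_docs]
    N = len(doc_sets)
    def p(*words):
        return sum(all(w in ds for w in words) for ds in doc_sets) / N
    scores = []
    for w1, w2 in combinations(top_words, 2):
        p1, p2, p12 = p(w1), p(w2), p(w1, w2)
        if p12 > 0:
            pmi = math.log(p12 / (p1 * p2 + eps))
            scores.append(pmi / (-math.log(p12 + eps)))
        else:
            scores.append(-1.0)  # no co-occurrence: NPMI's minimum
    return sum(scores) / len(scores)
```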
{
"text": "Anchor Threshold A good anchor word must have a unique, specific context but also explain other words well. A word that appears only once will have a very specific cooccurence pattern but will explain other words' coocurrence poorly because the observations are so sparse. As discussed in Section 2, the anchor method uses document frequency M as a threshold to only consider words with robust counts. Because all regularizations benefit equally from higher-quality anchor words, we use crossvalidation to select the document frequency cutoff M using the unregularized anchor algorithm. Figure 1 shows the performance of anchor with different M on our three datasets with 20 topics for our two measures HL and TI-i.",
"cite_spans": [],
"ref_spans": [
{
"start": 587,
"end": 595,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Grid Search for Parameters on Development Set",
"sec_num": "4.1"
},
{
"text": "Regularization Weight Once we select a cutoff M for each combination of dataset, number of topics K and a evaluation measure, we select a regularization weight \u03bb on the development set. Figure 2 shows that beta regularization framework improves topic interpretability TI-i on all datasets and improved the held-out likelihood HL on 20NEWS. The L 2 regularization also improves held-out likelihood HL for the 20NEWS corpus (Figure 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 194,
"text": "Figure 2",
"ref_id": null
},
{
"start": 422,
"end": 431,
"text": "(Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Grid Search for Parameters on Development Set",
"sec_num": "4.1"
},
{
"text": "In the interests of space, we do not show the figures for selecting M and \u03bb using TI-e, which is similar to TI-i: anchor-beta improves TI-e score on all datasets, anchor-L 2 improves TI-e on 20NEWS and NIPS with 20 topics and NYT with 40 topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grid Search for Parameters on Development Set",
"sec_num": "4.1"
},
{
"text": "With document frequency M and regularization weight \u03bb selected from the development set, we compare the performance of those models on the test set. We also compare with standard implementations of Latent Dirichlet Allocation: Blei's LDAC (variational) and Mallet (mcmc). We run 100 iterations for LDAC and 5000 iterations for Mallet.",
"cite_spans": [
{
"start": 239,
"end": 252,
"text": "(variational)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Regularization",
"sec_num": "4.2"
},
{
"text": "Each result is averaged over three random runs and appears in Figure 3 . The highly-tuned, widelyused implementations uniformly have better heldout likelihood than anchor-based methods, but the much faster anchor methods are often comparable. Within anchor-based methods, L 2 -regularization offers comparable held-out likelihood as unregularized anchor, while anchor-beta often has better interpretability. Because of the mismatch between the specialized vocabulary of NIPS and the generalpurpose language of Wikipedia, TI-e has a high variance.",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 70,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluating Regularization",
"sec_num": "4.2"
},
{
"text": "A frequent use of priors is to add information to a model. This is not possible with the existing anchor method. An informed prior for topic models seeds a topic with words that describe a topic of interest. In a topic model, these seeds will serve as a \"magnet\", attracting similar words to the topic (Zhai et al., 2012) .",
"cite_spans": [
{
"start": 302,
"end": 321,
"text": "(Zhai et al., 2012)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Informed Regularization",
"sec_num": "4.3"
},
{
"text": "We can achieve a similar goal with anchor-L 2 . Instead of encouraging anchor coefficients to be zero in Equation 5, we can instead encourage word probabilities to close to an arbitrary mean \u00b5 i,k . This vector can reflect expert knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informed Regularization",
"sec_num": "4.3"
},
{
"text": "One example of a source of expert knowledge is Linguistic Inquiry and Word Count (Pennebaker and Francis, 1999, LIWC) , a dictionary of keywords related to sixty-eight psychological concepts such as positive emotions, negative emotions, and death. For example, it associates \"excessive, estate, money, cheap, expensive, living, profit, live, rich, income, poor, etc.\" for the concept materialism.",
"cite_spans": [
{
"start": 81,
"end": 117,
"text": "(Pennebaker and Francis, 1999, LIWC)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Informed Regularization",
"sec_num": "4.3"
},
{
"text": "We associate each anchor word with its closest LIWC category based on the cooccurrence matrix Q. We define the score of a category for anchor word w_{s_k} as \u03a3_i Q_{s_k, i}, where i ranges over the words in that category; we compute this score for every category and every anchor word, find the highest-scoring pair, assign that category to that anchor word, and greedily repeat until every anchor word has a category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informed Regularization",
"sec_num": "4.3"
},
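The greedy matching described above can be sketched as follows. This is a minimal reading of the procedure: `Q`, `anchors`, and `categories` are hypothetical stand-ins for the cooccurrence matrix, the anchor word indices, and the LIWC category word lists, and we assume a category is consumed once assigned (a detail the text leaves open):

```python
import numpy as np

def assign_categories(Q, anchors, categories):
    """Greedily pair each anchor word with a LIWC category.

    Q          -- cooccurrence matrix, rows/columns indexed by word id
    anchors    -- list of anchor word indices s_k
    categories -- dict: category name -> list of word indices in the category
                  (assumed to contain at least as many categories as anchors)
    """
    # Score of a category for anchor s_k: sum_i Q[s_k, i] over its words.
    scores = {(a, c): Q[a, words].sum()
              for a in anchors
              for c, words in categories.items()}
    assignment, used = {}, set()
    while len(assignment) < len(anchors):
        # Highest-scoring pair among unassigned anchors and unused categories.
        a, c = max((pair for pair in scores
                    if pair[0] not in assignment and pair[1] not in used),
                   key=scores.get)
        assignment[a] = c
        used.add(c)
    return assignment
```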
{
"text": "Given these associations, we create a goal mean \u00b5 i,k . If LIWC word i is associated with L_i anchor words, then \u00b5 i,k = 1/L_i when keyword i is associated with anchor word w_{s_k} and zero otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informed Regularization",
"sec_num": "4.3"
},
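Under the same hypothetical names as above, the goal mean \u00b5 follows directly from the anchor-to-category assignment; a minimal sketch:

```python
import numpy as np

def goal_mean(vocab_size, anchor_categories, categories):
    """Build mu with mu[i, k] = 1 / L_i when LIWC word i belongs to the
    category assigned to anchor k, and 0 otherwise, where L_i is the
    number of anchor words that word i is associated with."""
    K = len(anchor_categories)            # one category name per anchor/topic k
    mu = np.zeros((vocab_size, K))
    for k, cat in enumerate(anchor_categories):
        for i in categories[cat]:         # words in anchor k's category
            mu[i, k] = 1.0
    counts = mu.sum(axis=1, keepdims=True)           # L_i per word
    np.divide(mu, counts, out=mu, where=counts > 0)  # rows with L_i = 0 stay 0
    return mu
```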
{
"text": "We apply anchor-L 2 with informed priors on NYT with twenty topics and compare the topics against the original topics from anchor. Table 3 shows that the topic with anchor word \"soviet\", when combined with LIWC, draws in the new words \"bush\" and \"nuclear\", reflecting the threats of force during the cold war. The topic with anchor word \"arms\", when associated with the LIWC category containing the terms \"agree\" and \"agreement\", draws in \"clinton\", who represented a more conciliatory foreign policy than his Republican predecessors.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Informed Regularization",
"sec_num": "4.3"
},
{
"text": "Having shown that regularization can improve the anchor topic modeling algorithm, in this section we discuss why these regularizations can improve the model and the implications for practitioners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Efficiency Efficiency is a function of the number of iterations and the cost of each iteration. Both anchor and anchor-L 2 require a single iteration, although the latter's iteration is slightly more expensive. For beta, as described in Section 3.2, we update anchor coefficients C row by row, and then repeat the process over several iterations until it converges. However, it often converges within ten iterations (Figure 4 ) on all three datasets: this requires far fewer iterations than MCMC or variational inference, and the iterations are less expensive. In addition, since we optimize each row C i,\u2022 independently, the algorithm can be easily parallelized.",
"cite_spans": [],
"ref_spans": [
{
"start": 416,
"end": 425,
"text": "(Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Sensitivity to Document Frequency While the original anchor is sensitive to the document frequency M (Figure 1 ), adding regularization makes this less critical. Both anchor-L 2 and anchor-beta are less sensitive to M than anchor.",
"cite_spans": [],
"ref_spans": [
{
"start": 101,
"end": 110,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "To highlight this, we compare the topics of anchor and anchor-beta when M = 100. As Table 4 shows, the words \"article\", \"write\", \"don\", and \"doe\" appear in most of anchor's topics. While anchor-beta also has some bad topics, it still finds reasonable topics, demonstrating its greater robustness to a suboptimal M. L 2 (Sometimes) Improves Generalization As Figure 2 shows, anchor-L 2 sometimes improves held-out development likelihood for the smaller topic number [Figure 3 caption: Comparing anchor-beta and anchor-L 2 against the original anchor and the traditional variational and MCMC methods on HL score and TI score. variational and mcmc provide the best held-out generalization. anchor-beta sometimes gives the best TI score and is consistently better than anchor. The specialized vocabulary of NIPS causes high variance for the extrinsic interpretability evaluation (TI-e).] [Table 3 caption: Examples of topic comparison between anchor and informed anchor-L 2 . A topic is labeled with the anchor word for that topic. The bold words are the informed prior from LIWC. With an informed prior, relevant words appear in the top words of a topic; this also draws in other related terms (red).]",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 91,
"text": "As Table 4",
"ref_id": "TABREF6"
},
{
"start": 368,
"end": 376,
"text": "Figure 2",
"ref_id": null
},
{
"start": 475,
"end": 483,
"text": "Figure 3",
"ref_id": null
},
{
"start": 877,
"end": 884,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "20NEWS corpus. However, the \u03bb selected on development data does not always improve test set performance. Thus, in Figure 3 , anchor-L 2 closely tracks anchor: L 2 regularization does not hurt generalization while imparting expressiveness and robustness to parameter settings.",
"cite_spans": [],
"ref_spans": [
{
"start": 114,
"end": 122,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Beta Improves Interpretability Figure 3 shows that anchor-beta improves topic interpretability (TI) compared to unregularized anchor methods. In this section, we try to understand why.",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 39,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We first compare the topics from the original anchor against anchor-beta to analyze the topics qualitatively. Table 5 shows that beta regularization promotes rarer words within a topic and demotes common words. For example, in the topic about hockey with the anchor word game, \"run\" and \"good\"-ambiguous, polysemous words-in the unregularized topic are replaced by \"playoff\" and \"trade\" in the regularized topic. These words are less ambiguous and more likely to make sense to a consumer of topic models. Figure 5 shows why this happens. Compared to the unregularized topics from anchor, the beta regularized topic steals from the rich and creates a more uniform distribution. Thus, highly frequent words do not as easily climb to the top of the distribution, and the topics reflect topical, relevant words rather than globally frequent terms.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 117,
"text": "Table 5",
"ref_id": "TABREF8"
},
{
"start": 505,
"end": 513,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "A topic model is a popular tool for quickly getting the gist of large corpora. However, running such an analysis on large corpora entails a substantial computational cost. While techniques such as anchor algorithms offer faster solutions, they come at the cost of the expressive priors common in Bayesian formulations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "This paper introduces two different regularizations that offer users more interpretable models and the ability to inject prior knowledge without sacrificing the speed and generalizability of the underlying approach. However, this approach does sacrifice the beautiful theoretical guarantees of previous work. An important piece of future work is a theoretical understanding of generalizability in extensible, regularized models. Incorporating other regularizations could further improve performance or unlock new applications. Our regularizations do not explicitly encourage sparsity; applying other regularizations such as L 1 could encourage true sparsity (Tibshirani, 1994) , and structured priors (Andrzejewski et al., 2009) could efficiently incorporate constraints on topic models.",
"cite_spans": [
{
"start": 676,
"end": 694,
"text": "(Tibshirani, 1994)",
"ref_id": "BIBREF34"
},
{
"start": 719,
"end": 746,
"text": "(Andrzejewski et al., 2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "These regularizations could improve spectral algorithms for latent variable models, improving performance on other NLP tasks such as latent variable PCFGs (Cohen et al., 2013) and HMMs (Anandkumar et al., 2012) , combining the flexibility and robustness offered by priors with the speed and accuracy of new, scalable algorithms. [Figure 5 caption; axes: rank of word in topic (topic shown by anchor word) vs. log p(word | topic): How beta regularization influences the topic distribution. Each topic is identified with its associated anchor word. Compared to the unregularized anchor method, anchor-beta steals probability mass from the \"rich\" and prefers a smoother distribution of probability mass. These words often tend to be unimportant, polysemous words common across topics.]",
"cite_spans": [
{
"start": 161,
"end": 181,
"text": "(Cohen et al., 2013)",
"ref_id": "BIBREF9"
},
{
"start": 191,
"end": 216,
"text": "(Anandkumar et al., 2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 406,
"end": 414,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "For a, b < 1, the expected value is still the uniform distribution but the mode lies at the boundaries of the simplex. This corresponds to a sparse Dirichlet distribution, which our optimization cannot at present model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://cs.nyu.edu/~roweis/data.html ; http://qwone.com/~jason/20Newsgroups/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers, Hal Daum\u00e9 III, Ke Wu, and Ke Zhai for their helpful comments. This work was supported by NSF Grant IIS-1320538. Boyd-Graber is also supported by NSF Grant CCF-1018625. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A method of moments for mixture models and hidden markov models",
"authors": [
{
"first": "Animashree",
"middle": [],
"last": "Anandkumar",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Sham",
"middle": [
"M"
],
"last": "Kakade",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of Conference on Learning Theory",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Animashree Anandkumar, Daniel Hsu, and Sham M. Kakade. 2012. A method of moments for mixture models and hidden markov models. In Proceedings of Conference on Learning Theory.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Incorporating domain knowledge into topic modeling via Dirichlet forest priors",
"authors": [
{
"first": "David",
"middle": [],
"last": "Andrzejewski",
"suffix": ""
},
{
"first": "Xiaojin",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Craven",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the International Conference of Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Andrzejewski, Xiaojin Zhu, and Mark Craven. 2009. Incorporating domain knowledge into topic modeling via Dirichlet forest priors. In Proceedings of the International Conference of Machine Learn- ing.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A practical algorithm for topic modeling with provable guarantees",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Rong",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Yoni",
"middle": [],
"last": "Halpern",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Mimno",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Moitra",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Yichen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Rong Ge, Yoni Halpern, David M. Mimno, Ankur Moitra, David Sontag, Yichen Wu, and Michael Zhu. 2012a. A practical algorithm for topic modeling with provable guarantees. CoRR, abs/1212.4777.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning topic models - going beyond SVD",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Rong",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Moitra",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Rong Ge, and Ankur Moitra. 2012b. Learning topic models -going beyond svd. CoRR, abs/1204.1956.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Correlated topic models",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "John",
"middle": [
"D"
],
"last": "Blei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei and John D. Lafferty. 2005. Correlated topic models. In Proceedings of Advances in Neural Information Processing Systems.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Ng, and Michael Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bayesian word sense induction",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Brody",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Brody and Mirella Lapata. 2009. Bayesian word sense induction. In Proceedings of the Euro- pean Chapter of the Association for Computational Linguistics, Athens, Greece.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Reading tea leaves: How humans interpret topic models",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Gerrish",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Chang, Jordan Boyd-Graber, Chong Wang, Sean Gerrish, and David M. Blei. 2009. Reading tea leaves: How humans interpret topic models. In Proceedings of Advances in Neural Information Pro- cessing Systems.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction",
"authors": [
{
"first": "B",
"middle": [],
"last": "Shay",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2009,
"venue": "Conference of the North American Chapter",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shay B. Cohen and Noah A. Smith. 2009. Shared lo- gistic normal distributions for soft parameter tying in unsupervised grammar induction. In Conference of the North American Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Experiments with spectral learning of latent-variable PCFGs",
"authors": [
{
"first": "Shay",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Stratos",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Dean",
"middle": [
"P"
],
"last": "Foster",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2013,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shay Cohen, Karl Stratos, Michael Collins, Dean P. Foster, and Lyle Ungar. 2013. Experiments with spectral learning of latent-variable PCFGs. In Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Frustratingly easy domain adaptation",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III. 2007. Frustratingly easy domain adap- tation. In Proceedings of the Association for Com- putational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "When does non-negative matrix factorization give correct decomposition into parts?",
"authors": [
{
"first": "David",
"middle": [],
"last": "Donoho",
"suffix": ""
},
{
"first": "Victoria",
"middle": [],
"last": "Stodden",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Donoho and Victoria Stodden. 2003. When does non-negative matrix factorization give correct decomposition into parts? page 2004. MIT Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Performance guarantees for regularized maximum entropy density estimation",
"authors": [
{
"first": "Miroslav",
"middle": [],
"last": "Dud\u00edk",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Phillips",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of Conference on Learning Theory",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miroslav Dud\u00edk, Steven J. Phillips, and Robert E. Schapire. 2004. Performance guarantees for reg- ularized maximum entropy density estimation. In Proceedings of Conference on Learning Theory.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Topic models for dynamic translation model adaptation",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Eidelman",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Eidelman, Jordan Boyd-Graber, and Philip Resnik. 2012. Topic models for dynamic translation model adaptation. In Proceedings of the Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Hierarchical bayesian domain adaptation",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2009,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel and Christopher D. Manning. 2009. Hierarchical bayesian domain adaptation. In Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics, Morristown, NJ, USA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Gnu Scientific Library: Reference Manual. Network Theory Ltd",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Galassi",
"suffix": ""
},
{
"first": "Jim",
"middle": [],
"last": "Davies",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Theiler",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Gough",
"suffix": ""
},
{
"first": "Gerard",
"middle": [],
"last": "Jungman",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Booth",
"suffix": ""
},
{
"first": "Fabrice",
"middle": [],
"last": "Rossi",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Galassi, Jim Davies, James Theiler, Brian Gough, Gerard Jungman, Michael Booth, and Fabrice Rossi. 2003. Gnu Scientific Library: Reference Manual. Network Theory Ltd.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Finding scientific topics",
"authors": [
{
"first": "L",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Steyvers",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "101",
"issue": "1",
"pages": "5228--5235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas L. Griffiths and Mark Steyvers. 2004. Find- ing scientific topics. Proceedings of the National Academy of Sciences, 101(Suppl 1):5228-5235.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Studying the history of ideas using topic models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of Emperical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Hall, Daniel Jurafsky, and Christopher D. Man- ning. 2008. Studying the history of ideas using topic models. In Proceedings of Emperical Methods in Natural Language Processing.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Interactive topic modeling",
"authors": [
{
"first": "Yuening",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Brianna",
"middle": [],
"last": "Satinoff",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Machine Learning Journal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuening Hu, Jordan Boyd-Graber, Brianna Satinoff, and Alison Smith. 2013. Interactive topic modeling. Machine Learning Journal.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Macroanalysis: Digital Methods and Literary History. Topics in the Digital Humanities",
"authors": [
{
"first": "Matt",
"middle": [
"L"
],
"last": "Jockers",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt L. Jockers. 2013. Macroanalysis: Digital Meth- ods and Literary History. Topics in the Digital Hu- manities. University of Illinois Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The spectral method for general mixture models",
"authors": [
{
"first": "Ravindran",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "Hadi",
"middle": [],
"last": "Salmasian",
"suffix": ""
},
{
"first": "Santosh",
"middle": [],
"last": "Vempala",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Conference on Learning Theory",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ravindran Kannan, Hadi Salmasian, and Santosh Vem- pala. 2005. The spectral method for general mixture models. In Proceedings of Conference on Learning Theory.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "20 newsgroups data set",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ken Lang. 2007. 20 newsgroups data set.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jey Han Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the European Chapter of the Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Mallet: A machine learning for language toolkit",
"authors": [
{
"first": "Andrew Kachites",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Kachites McCallum. 2002. Mal- let: A machine learning for language toolkit. http://www.cs.umass.edu/ mccallum/mallet.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Estimating a dirichlet distribution",
"authors": [
{
"first": "P",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Minka",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas P. Minka. 2000. Estimating a dirichlet distribution. Technical report, Mi- crosoft. http://research.microsoft.com/en- us/um/people/minka/papers/dirichlet/.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Automatic evaluation of topic coherence",
"authors": [
{
"first": "David",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Grieser",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2010,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Newman, Jey Han Lau, Karl Grieser, and Timo- thy Baldwin. 2010. Automatic evaluation of topic coherence. In Conference of the North American Chapter of the Association for Computational Lin- guistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Feature selection, l1 vs. l2 regularization, and rotational invariance",
"authors": [
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the International Conference of Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Y. Ng. 2004. Feature selection, l1 vs. l2 regu- larization, and rotational invariance. In Proceedings of the International Conference of Machine Learn- ing.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A twodimensional topic-aspect model for discovering multi-faceted topics",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
}
],
"year": 2010,
"venue": "Association for the Advancement of Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Paul and Roxana Girju. 2010. A two- dimensional topic-aspect model for discovering multi-faceted topics. In Association for the Advance- ment of Artificial Intelligence.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Linguistic Inquiry and Word Count",
"authors": [
{
"first": "James",
"middle": [
"W"
],
"last": "Pennebaker",
"suffix": ""
},
{
"first": "Martha",
"middle": [
"E"
],
"last": "Francis",
"suffix": ""
}
],
"year": 1999,
"venue": "Lawrence Erlbaum",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James W. Pennebaker and Martha E. Francis. 1999. Linguistic Inquiry and Word Count. Lawrence Erl- baum, 1 edition, August.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A tutorial on hidden Markov models and selected applications in speech recognition",
"authors": [
{
"first": "Lawrence",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the IEEE",
"volume": "77",
"issue": "2",
"pages": "257--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257- 286.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "On l2-norm regularization and the Gaussian prior",
"authors": [
{
"first": "Jason",
"middle": [
"Rennie"
],
"last": "",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Rennie. 2003. On l2-norm regularization and the Gaussian prior.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "NIPS 1-12 Dataset",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Roweis",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sam Roweis. 2002. NIPS 1-12 Dataset.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The New York Times annotated corpus",
"authors": [
{
"first": "Evan",
"middle": [],
"last": "Sandhaus",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evan Sandhaus. 2008. The New York Times annotated corpus.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A constructive definition of Dirichlet priors",
"authors": [
{
"first": "Jayaram",
"middle": [],
"last": "Sethuraman",
"suffix": ""
}
],
"year": 1994,
"venue": "Statistica Sinica",
"volume": "4",
"issue": "",
"pages": "639--650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jayaram Sethuraman. 1994. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639-650.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Regression shrinkage and selection via the lasso",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Tibshirani",
"suffix": ""
}
],
"year": 1994,
"venue": "Journal of the Royal Statistical Society, Series B",
"volume": "58",
"issue": "",
"pages": "267--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Tibshirani. 1994. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267-288.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Dropout training as adaptive regularization",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Wager",
"suffix": ""
},
{
"first": "Sida",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "351--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Wager, Sida Wang, and Percy Liang. 2013. Dropout training as adaptive regularization. In Proceedings of Advances in Neural Information Processing Systems, pages 351-359.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Rethinking LDA: Why priors matter",
"authors": [
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanna Wallach, David Mimno, and Andrew McCallum. 2009. Rethinking LDA: Why priors matter. In Proceedings of Advances in Neural Information Processing Systems.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Efficient methods for topic model inference on streaming document collections",
"authors": [
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2009,
"venue": "Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Limin Yao, David Mimno, and Andrew McCallum. 2009. Efficient methods for topic model inference on streaming document collections. In Knowledge Discovery and Data Mining.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Mr. LDA: A flexible large scale topic modeling package using variational inference in MapReduce",
"authors": [
{
"first": "Ke",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Nima",
"middle": [],
"last": "Asadi",
"suffix": ""
},
{
"first": "Mohamad",
"middle": [],
"last": "Alkhouja",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ke Zhai, Jordan Boyd-Graber, Nima Asadi, and Mohamad Alkhouja. 2012. Mr. LDA: A flexible large scale topic modeling package using variational inference in MapReduce. In Proceedings of World Wide Web Conference.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "Convergence of the anchor coefficient C for anchor-beta. \u2206C is the difference between the current C and the C from the previous iteration. C converges within ten iterations for all three datasets.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"num": null,
"html": null,
"type_str": "table",
"text": "The number of documents in the train, development, and test folds in our three datasets.",
"content": "<table/>"
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"text": "The performance on both the HL and TI scores indicates that the unregularized anchor algorithm is very sensitive to M . The M selected here is applied to subsequent models.",
"content": "<table/> [Plot residue; recoverable caption: Figure 1: Grid search for document frequency M for our datasets with 20 topics (other configurations not shown) on development data. Panels plot HL and TI-i scores against regularization weight \u03bb for Beta and L2 regularization on 20NEWS, NIPS, and NYT.]"
},
"TABREF6": {
"num": null,
"html": null,
"type_str": "table",
"text": "Topics from anchor and anchor-beta with M = 100 on 20NEWS with 20 topics. Each topic is identified with its associated anchor word. When M = 100, the topics of anchor suffer: the four colored words appear in almost every topic. anchor-beta, in contrast, is less sensitive to suboptimal M .",
"content": "<table/>"
},
"TABREF8": {
"num": null,
"html": null,
"type_str": "table",
"text": "Comparing topics (labeled by their anchor word) from anchor and anchor-beta. With beta regularization, relevant words are promoted, while more general words are suppressed, improving topic coherence.",
"content": "<table/>"
}
}
}
}