| { |
| "paper_id": "E06-1014", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T10:34:28.800384Z" |
| }, |
| "title": "Improving Probabilistic Latent Semantic Analysis with Principal Component Analysis", |
| "authors": [ |
| { |
| "first": "Ayman", |
| "middle": [], |
| "last": "Farahat", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "ayman.farahat@gmail.com" |
| }, |
| { |
| "first": "Francine", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "chen@fxpal.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Probabilistic Latent Semantic Analysis (PLSA) models have been shown to provide a better model for capturing polysemy and synonymy than Latent Semantic Analysis (LSA). However, the parameters of a PLSA model are trained using the Expectation Maximization (EM) algorithm, and as a result, the trained model is dependent on the initialization values so that performance can be highly variable. In this paper we present a method for using LSA analysis to initialize a PLSA model. We also investigated the performance of our method for the tasks of text segmentation and retrieval on personal-size corpora, and present results demonstrating the efficacy of our proposed approach.", |
| "pdf_parse": { |
| "paper_id": "E06-1014", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Probabilistic Latent Semantic Analysis (PLSA) models have been shown to provide a better model for capturing polysemy and synonymy than Latent Semantic Analysis (LSA). However, the parameters of a PLSA model are trained using the Expectation Maximization (EM) algorithm, and as a result, the trained model is dependent on the initialization values so that performance can be highly variable. In this paper we present a method for using LSA analysis to initialize a PLSA model. We also investigated the performance of our method for the tasks of text segmentation and retrieval on personal-size corpora, and present results demonstrating the efficacy of our proposed approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In modeling a collection of documents for information access applications, the documents are often represented as a \"bag of words\", i.e., as term vectors composed of the terms and corresponding counts for each document. The term vectors for a document collection can be organized into a term by document co-occurrence matrix. When directly using these representations, synonyms and polysemous terms, that is, terms with multiple senses or meanings, are not handled well. Methods for smoothing the term distributions through the use of latent classes have been shown to improve the performance of a number of information access tasks, including retrieval over smaller collections (Deerwester et al., 1990 ), text segmentation (Brants et al., 2002) , and text classification (Wu and Gunopulos, 2002) .", |
| "cite_spans": [ |
| { |
| "start": 667, |
| "end": 703, |
| "text": "collections (Deerwester et al., 1990", |
| "ref_id": null |
| }, |
| { |
| "start": 725, |
| "end": 746, |
| "text": "(Brants et al., 2002)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 773, |
| "end": 797, |
| "text": "(Wu and Gunopulos, 2002)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The Probabilistic Latent Semantic Analysis model (PLSA) (Hofmann, 1999) provides a probabilistic framework that attempts to capture polysemy and synonymy in text for applications such as retrieval and segmentation. It uses a mixture decomposition to model the co-occurrence data, and the probabilities of words and documents are obtained by a convex combination of the aspects. The mixture approximation has a well defined probability distribution and the factors have a clear probabilistic meaning in terms of the mixture component distributions.", |
| "cite_spans": [ |
| { |
| "start": 56, |
| "end": 71, |
| "text": "(Hofmann, 1999)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The PLSA model computes the relevant probability distributions by selecting the model parameter values that maximize the probability of the observed data, i.e., the likelihood function. The standard method for maximum likelihood estimation is the Expectation Maximization (EM) algorithm. For a given initialization, the likelihood function increases with EM iterations until a local maximum is reached, rather than a global maximum, so that the quality of the solution depends on the initialization of the model. Additionally, the likelihood values across different initializations are not comparable, as we will show. Thus, the likelihood function computed over the training data cannot be used as a predictor of model performance across different models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Rather than trying to predict the best performing model from a set of models, in this paper we focus on finding a good way to initialize the PLSA model. We will present a framework for using Latent Semantic Analysis (LSA) (Deerwester et al., 1990) to better initialize the parameters of a corresponding PLSA model. The EM algorithm is then used to further refine the initial estimate. This combination of LSA and PLSA leverages the advantages of both. This paper is organized as follows: in section 2, we review related work in the area. In section 3, we summarize related work on LSA and its probabilistic interpretation. In section 4 we review the PLSA model and in section 5 we present our method for initializing a PLSA model using LSA model parameters. In section 6, we evaluate the performance of our framework on a text segmentation task and several smaller information retrieval tasks. And in section 7, we summarize our results and give directions for future work.", |
| "cite_spans": [ |
| { |
| "start": 222, |
| "end": 247, |
| "text": "(Deerwester et al., 1990)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A number of different methods have been proposed for handling the non-globally optimal solution when using EM. These include the use of Tempered EM (Hofmann, 1999) , combining models from different initializations in postprocessing (Hofmann, 1999; Brants et al., 2002) , and trying to find good initial values. For their segmentation task, Brants et al. (2002) found overfitting, which Tempered EM helps address, was not a problem and that early stopping of EM provided good performance and faster learning. Computing and combining different models is computationally expensive, so a method that reduces this cost is desirable. Different methods for initializing EM include the use of random initialization e.g., (Hofmann, 1999) , k-means clustering, and an initial cluster refinement algorithm (Fayyad et al., 1998) . K-means clustering is not a good fit to the PLSA model in several ways: it is sensitive to outliers, it is a hard clustering, and the relation of the identified clusters to the PLSA parameters is not well defined. In contrast to these other initialization methods, we know that the LSA reduces noise in the data and handles synonymy, and so should be a good initialization. The trick is in trying to relate the LSA parameters to the PLSA parameters.", |
| "cite_spans": [ |
| { |
| "start": 148, |
| "end": 163, |
| "text": "(Hofmann, 1999)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 232, |
| "end": 247, |
| "text": "(Hofmann, 1999;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 248, |
| "end": 268, |
| "text": "Brants et al., 2002)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 340, |
| "end": 360, |
| "text": "Brants et al. (2002)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 713, |
| "end": 728, |
| "text": "(Hofmann, 1999)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 795, |
| "end": 816, |
| "text": "(Fayyad et al., 1998)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "LSA is based on singular value decomposition (SVD) of a term by document matrix and retaining the top K singular values, mapping documents and terms to a new representation in a latent semantic space. It has been successfully applied in different domains including automatic indexing. Text similarity is better estimated in this low dimension space because synonyms are mapped to nearby locations and noise is reduced, although handling of polysemy is weak. In contrast, the PLSA model distributes the probability mass of a term over the different latent classes correspond-ing to different senses of a word, and thus better handles polysemy (Hofmann, 1999 ). The LSA model has two additional desirable features. First, the word document co-occurrence matrix can be weighted by any weight function that reflects the relative importance of individual words (e.g., tfidf). The weighting can therefore incorporate external knowledge into the model. Second, the SVD algorithm is guaranteed to produce the matrix of rank that minimizes the distance to the original word document co-occurrence matrix.", |
| "cite_spans": [ |
| { |
| "start": 642, |
| "end": 656, |
| "text": "(Hofmann, 1999", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "As noted in Hofmann (1999) , an important difference between PLSA and LSA is the type of objective function utilized. In LSA, this is the L2 or Frobenius norm on the word document counts. In contrast, PLSA relies on maximizing the likelihood function, which is equivalent to minimizing the cross-entropy or Kullback-Leibler divergence between the empirical distribution and the predicted model distribution of terms in documents.", |
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 26, |
| "text": "Hofmann (1999)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A number of methods for deriving probabilities from LSA have been suggested. For example, Coccaro and Jurafsky (1998) proposed a method based on the cosine distance, and Tipping and Bishop (1999) give a probabilistic interpretation of principal component analysis that is formulated within a maximum-likelihood framework based on a specific form of Gaussian latent variable model. In contrast, we relate the LSA parameters to the PLSA model using a probabilistic interpretation of dimensionality reduction proposed by Ding (1999) that uses an exponential distribution to model the term and document distribution conditioned on the latent class.", |
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 117, |
| "text": "Coccaro and Jurafsky (1998)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 170, |
| "end": 195, |
| "text": "Tipping and Bishop (1999)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 518, |
| "end": 529, |
| "text": "Ding (1999)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We briefly review the LSA model, as presented in Deerwester et al. (1990) , and then outline the LSA-based probability model presented in Ding (1999) .", |
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 73, |
| "text": "Deerwester et al. (1990)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 138, |
| "end": 149, |
| "text": "Ding (1999)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The term to document association is presented as a term-document matrix", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u00a1 \u00a3 \u00a2 \u00a4 \u00a5 \u00a6 \u00a7 \u00a9 \u00a9 \u00a7 \u00a9 . . . . . . . . . \u00a7 \u00a9 \u00a7 \u00a2 ! # \" % $ ' & ( \u00a2 \u00a4 \u00a5 \u00a6 )\"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA", |
| "sec_num": "3" |
| }, |
| { |
| "text": ". . .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "(1) containing the frequency of the 2 index terms occurring in 3 documents. The frequency counts can also be weighted to reflect the relative importance of individual terms (e.g., Guo et al., (2003) . LSA represents terms and documents in a new vector space with smaller dimensions that minimize the distance between the projected terms and the original terms. This is done through the truncated (to rank ) singular value decomposition", |
| "cite_spans": [ |
| { |
| "start": 180, |
| "end": 198, |
| "text": "Guo et al., (2003)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ") 1 0", |
| "sec_num": null |
| }, |
| { |
| "text": "\u00a1 \u00a4 \u00a1 \u00a6 \u00a5 \u00a2 \u00a7 \u00a9 \u00a5 \u00a5 or explicitly \u00a1 \u00a5 \u00a2 \" ! \u00a5 & \u00a4 \u00a5 \u00a6 \" \u00a9 . . . \" \u00a4 \u00a5 \u00a6 # \" . . . # \u00a5 % $ (2) Among all 2 ' & 3 matrices of rank , \u00a1 \u00a6 \u00a5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ") 1 0", |
| "sec_num": null |
| }, |
| { |
| "text": "is the one that minimizes the Frobenius norm", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ") 1 0", |
| "sec_num": null |
| }, |
| { |
| "text": "( ) ( \u00a1 1 0 \u00a1 2 \u00a5 ( ) (3 4 $", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ") 1 0", |
| "sec_num": null |
| }, |
| { |
| "text": "The LSA model based on SVD is a dimensionality reduction algorithm and as such does not have a probabilistic interpretation. However, under certain assumptions on the distribution of the input data, the SVD can be used to define a probability model. In this section, we summarize the results presented in Ding (1999) of a dual probability representation of LSA. Assuming the probability distribution of a document are the left eigenvectors", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA-based Probability Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\" 8 \u00a5 in the SVD of \u00a1 used in LSA: 9 ! \u00a1 ( \" ! \u00a5 & ( \u00a2 1 @ A C B E D \u00a2 FG 7 H E I Q P S R ! F T F T FR ! A C B E D U FG W V X I Y P ( \" $ a $ a $ 8 \u00a5 &", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "LSA-based Probability Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where`", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA-based Probability Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\" ! \u00a5 &", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA-based Probability Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "is a normalization constant. The dual formulation for the probability of term ) in terms of the tight eigenvectors (i.e., the document representations", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA-based Probability Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "# \" # \u00a5 & of the matrix \u00a1 \u00a9 \u00a5 is: 9 ) 4 ( # \" # \u00a5 & ( \u00a2 @ A c b e d f Fg 7 H E I P R ! F T F T FR ! A c b e d Y Fg V I P # \" # \u00a5 & (4) where` # \" # \u00a5 & is a normalization constant. Ding also shows that \u00a1 is related to # \u00a1 by: \u00a1 \u00a2 h \" p i \u00a1 # \u00a1 & \u00a3 \u00a2 h 6 6", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "LSA-based Probability Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We will use Equations 3-5 in relating LSA to PLSA in section 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA-based Probability Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The PLSA model (Hofmann, 1999 ) is a generative statistical latent class model: 1 , where", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 29, |
| "text": "(Hofmann, 1999", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLSA", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "9 t v ( q & 5 \u00a2 ' w y x 9 t u (r & 9 r s ( q & $", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "PLSA", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The joint probability between a word and document,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLSA", |
| "sec_num": "4" |
| }, |
| { |
| "text": "9 q 6 t &", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLSA", |
| "sec_num": "4" |
| }, |
| { |
| "text": ", is given by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLSA", |
| "sec_num": "4" |
| }, |
| { |
| "text": "9 q 6 t & \u00a2 9 q & 9 t v ( q & \u00a2 9 q & w x 9 t v (r & 9 r s ( q &", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLSA", |
| "sec_num": "4" |
| }, |
| { |
| "text": "and using Bayes' rule can be written as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLSA", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "9 q 6 t & \u00a2 w a x 9 t v (r & 9 q (r & 9 r & $", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "PLSA", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The likelihood function is given by ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLSA", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u00a2 w a w 3 q 6 t & p ) 9 q 6 t & $", |
| "eq_num": "(8)" |
| } |
| ], |
| "section": "PLSA", |
| "sec_num": "4" |
| }, |
| { |
| "text": "An important consideration in PLSA modeling is that the performance of the model is strongly affected by the initialization of the model prior to training. Thus a method for identifying a good initialization, or alternatively a good trained model, is needed. If the final likelihood value obtained after training was well correlated with accuracy, then one could train several PLSA models, each with a different initialization, and select the model with the largest likelihood as the best model. Although, for a given initialization, the likelihood increases to a locally optimal value with each iteration of EM, the final likelihoods obtained from different initializations after training do not correlate well with the accuracy of the corresponding models. This is shown in Table 1 , which presents correlation coefficients between likelihood values and either average or breakeven precision for several datasets with 64 or 256 latent classes, i.e., factors. Twenty random initializations were used per evaluation. Fifty iterations of EM per initialization were run, which empirically is more than enough to approach the optimal likelihood. The coefficients range from -0.64 to 0.25. The poor correlation indicates the need for a method to handle the variation in performance due to the influence of different initialization values, for example through better initialization methods. Hofmann (1999) and Brants (2002) averaged results from five and four random initializations, respectively, and empirically found this to improve performance. The combination of models enables redundancies in the models to minimize the expression of errors. We extend this approach by replacing one random initialization with one reasonably good initialization in the averaged models. We will empirically show that having at least one reasonably good initialization improves the performance over simply using a number of different initializations.", |
| "cite_spans": [ |
| { |
| "start": 1386, |
| "end": 1400, |
| "text": "Hofmann (1999)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1405, |
| "end": 1418, |
| "text": "Brants (2002)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 776, |
| "end": 783, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model Initialization and Performance", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The EM algorithm for estimating the parameters of the PLSA model is initialized with estimates of the model parameters . Hofmann (1999) relates the parameters of the PLSA model to an LSA model as follows: contain negative values and are not probability distributions. However, using equations 3 and 4, we can attach a probabilistic interpretation to LSA, and then relate \u00a7", |
| "cite_spans": [ |
| { |
| "start": 121, |
| "end": 135, |
| "text": "Hofmann (1999)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA-based Initialization of PLSA", |
| "sec_num": "5" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u00a2 \u00a7 \u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7 \u00a9 \u00a1 \u00a3 \u00a6 \u00a5 \u00a7 \u00a1 \u00a3 \u00a6 \u00a5 \u00a7 (13) \u00a7 \u00a2 \u00a1 \u00a4 \u00a3 \u00a5 \u00a7 \u00a2 q (r & & (14) \u00a2 \u00a1 \u00a4 \u00a3 \u00a5 \u00a7 \u00a2 t i (r & & i (15) \u00a9 \u00a2 q ! r & & $", |
| "eq_num": "(16)" |
| } |
| ], |
| "section": "LSA-based Initialization of PLSA", |
| "sec_num": "5" |
| }, |
| { |
| "text": "# \u00a1 \u00a3 \u00a6 \u00a5 \u00a7 and \u00a2 \u00a1 \u00a3 \u00a6 \u00a5 \u00a7", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA-based Initialization of PLSA", |
| "sec_num": "5" |
| }, |
| { |
| "text": "with the corresponding LSA matrices. We now outline this relation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA-based Initialization of PLSA", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Equation 4 represents the probability of occurrence of term as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA-based Initialization of PLSA", |
| "sec_num": "5" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "9 ) S \u00a1 ( # \" $ a $ a $ # \u00a5 & \u00a2 9 # \" $ a $ a $ # \u00a5 ( ) S \u00a1 & 9 ) S \u00a1 & 9 # \" $ a $ a $ # \u00a5 & \u00a2 9 ) S \u00a1 & 9 # \" ( ) U \u00a1 & 9 # \u00a5 ( ) U \u00a1 & 9 # \" 1 & 9 # \u00a5 & \u00a2 9 \u00a9 ) ( ) S \u00a1 & 0 \u00a1 1 \u00a9 9 ) S \u00a1 ( # 3 2 & $", |
| "eq_num": "(17)" |
| } |
| ], |
| "section": "LSA-based Initialization of PLSA", |
| "sec_num": "5" |
| }, |
| { |
| "text": "And using Equation (4) we get: . We make the simplifying assumption that", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA-based Initialization of PLSA", |
| "sec_num": "5" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "9 \u00a9 ) ( ) S \u00a1 & 0 \u00a1 1 \u00a9 9 ) S \u00a1 ( # 2 & 5 \u00a2 5 4 \u00a1 1 \u00a9 @ A c b D Fg 7 6IP # \" $ a $ a $ # \u00a5 &", |
| "eq_num": "(18)" |
| } |
| ], |
| "section": "LSA-based Initialization of PLSA", |
| "sec_num": "5" |
| }, |
| { |
| "text": "9 ) \u00a2 \u00a1 &", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA-based Initialization of PLSA", |
| "sec_num": "5" |
| }, |
| { |
| "text": "is constant across terms and normalize the exponential term to a probability: ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA-based Initialization of PLSA", |
| "sec_num": "5" |
| }, |
| { |
| "text": "9 ) 4 ( # \u00a1 & ( \u00a2 @ A c b d Fg \u00a2 D I P 1 \u00a9 @ A c b V", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA-based Initialization of PLSA", |
| "sec_num": "5" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "9 q (r i & \u00a4 @ A C B d Y FG D IP 1 \u00a9 @ A C B V FG D IP", |
| "eq_num": "(20)" |
| } |
| ], |
| "section": "LSA-based Initialization of PLSA", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The singular values, In our LSA-initialized PLSA model, we initialize the PLSA model parameters using Equations 19-21. The EM algorithm is then used beginning with the E-step as outlined in Equations 9-12.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSA-based Initialization of PLSA", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In this section we evaluate the performance of LSA-initialized PLSA (LSA-PLSA). We compare the performance of LSA-PLSA to LSA only and PLSA only, and also compare its use in combination with other models. We give results for a smaller information retrieval application and a text segmentation application, tasks where the reduced dimensional representation has been successfully used to improve performance over simpler word count models such as tf-idf.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "To test our approach for PLSA initialization we developed an LSA implementation based on the SVDLIBC package (http://tedlab.mit.edu/\u00a2 dr/SVDLIBC/) for computing the singular values of sparse matrices. The PLSA implementation was based on an earlier implementation by Brants et al. (2002) . For each of the corpora, we tokenized the documents and used the LinguistX morphological analyzer to stem the terms. We used entropy weights (Guo et al., 2003) to weight the terms in the document matrix.", |
| "cite_spans": [ |
| { |
| "start": 267, |
| "end": 287, |
| "text": "Brants et al. (2002)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 431, |
| "end": 449, |
| "text": "(Guo et al., 2003)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Description", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "We compared the performance of the LSA-PLSA model against randomly-initialized PLSA and against LSA for four different retrieval tasks. In these tasks, the retrieval is over a smaller corpus, on the order of a personal document collection. We used the following four standard document collections: (i) MED (1033 document abstracts from the National Library of Medicine), (ii) CRAN (1400 documents from the Cranfield Institute of Technology), (iii) CISI (1460 abstracts in library science from the Institute for Scientific Information) and (iv) CACM (3204 documents from the association for computing machinery). For each of these document collections, we computed the LSA, PLSA, and LSA-PLSA representations of both the document collection and the queries for a range of latent classes, or factors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Information Retrieval", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "For each data set, we used the computed representations to estimate the similarity of each query to all the documents in the original collection. For the LSA model, we estimated the similarity using the cosine distance between the reduced dimensional representations of the query and the candidate document. For the PLSA and LSA-PLSA models, we first computed the probability of each word occurring in the document, We evaluated the retrieval results (at the 11 standard recall levels as well as the average precision and break-even precision) using manually tagged relevance. Figure 1 shows the average precision as a function of the number of latent classes for the CACM collection, the largest of the datasets. The LSA-PLSA model performance was better than both the LSA performance and the PLSA performance at all class sizes. This same general trend was observed for the CISI dataset. For the two smallest datasets, the LSA-PLSA model performed better than the randomly-initialized PLSA model at all class sizes; it performed better than the LSA model at the larger classes sizes where the best performance is obtained. In Table 2 the performance for each model using the optimal number of latent classes is shown. The results show that LSA-PLSA outperforms LSA on 7 out of 8 evaluations. LSA-PLSA outperforms both random and k-means initialization of PLSA in all evaluations. In addition, performance using random initialization was never worse than kmeans initialization, which itself is sensitive to initialization values. Thus in the rest of our experiments we initialized PLSA models using the simpler random-initialization instead of k-means initialization. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 577, |
| "end": 585, |
| "text": "Figure 1", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 1128, |
| "end": 1135, |
| "text": "Table 2", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Information Retrieval", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "We explored the use of an LSA-PLSA model when averaging the similarity scores from multiple models for ranking in retrieval. We compared a baseline of 4 randomly-initialized PLSA models against 2 averaged models that contain an LSA-PLSA model: 1) 1 LSA, 1 PLSA, and 1 LSA-PLSA model and 2) 1 LSA-PLSA with 3 PLSA models. We also compared these models against the performance of an averaged model without an LSA-PLSA model: 1 LSA and 1 PLSA model. In each case, the PLSA models were randomly initialized. Figure 2 shows the average precision as a function of the number of latent classes for the CISI collection using multiple models. In all class sizes, a combined model that included the LSAinitialized PLSA model had performance that was at least as good as using 4 PLSA models. This was also true for the CRAN dataset. For the other two datasets, the performance of the combined model was always better than the performance of 4 PLSA models when the number of factors was no more than 200-300, the region where the best performance was observed. Table 3 summarizes the results and gives the best performing model for each task. Comparing Tables 2 and 3, note that the use of multiple models improved retrieval results. Table 3 also indicates that combining 1 LSA, 1 PLSA and 1 LSA-PLSA models outperformed the combination of 4 PLSA models in 7 out of 8 evaluations. For our data, the time to compute the LSA model is approximately 60% of the time to compute a PLSA model. The running time of the \"LSA PLSA LSA-PLSA\" model requires computing 1 LSA and 2 PLSA models, in contrast to 4 models for the 4PLSA model, therefore requiring less than 75% of the running time of the 4PLSA model.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 504, |
| "end": 512, |
| "text": "Figure 2", |
| "ref_id": "FIGREF6" |
| }, |
| { |
| "start": 1049, |
| "end": 1056, |
| "text": "Table 3", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 1222, |
| "end": 1229, |
| "text": "Table 3", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Multiple Models", |
| "sec_num": "6.2.2" |
| }, |
| { |
| "text": "A number of researchers (e.g., Li and Yamanishi (2000); Hearst (1997)) have developed text segmentation systems. We compared the use of different initializations on 500 documents created from Reuters-21578, in a manner similar to Li and Yamanishi (2000). The performance is measured using the error probability at the word level and at the sentence level (Beeferman et al., 1997). This measure allows for close matches in segment boundaries: specifically, a hypothesized boundary must lie within a fixed number of words/sentences of a true boundary, set to be half the average segment length in the test data. In order to account for the random initial values of the PLSA models, we performed the whole set of experiments for each parameter setting four times and averaged the results.", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 55, |
| "text": "Li and Yamanishi (2000)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 58, |
| "end": 71, |
| "text": "Hearst (1997)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 189, |
| "end": 212, |
| "text": "Li and Yamanishi (2000)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 298, |
| "end": 322, |
| "text": "(Beeferman et al., 1997)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Segmentation", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "We compared the segmentation performance using an LSA-PLSA model against the randomly-initialized PLSA models used by Brants et al. (2002). Table 4 presents the performance over different class sizes for the two models. Comparing performance at the optimum class size for each model, the results in Table 4 show that the LSA-PLSA model outperforms PLSA on both word and sentence error rate.", |
| "cite_spans": [ |
| { |
| "start": 117, |
| "end": 137, |
| "text": "Brants et al. (2002)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 140, |
| "end": 147, |
| "text": "Table 4", |
| "ref_id": "TABREF8" |
| }, |
| { |
| "start": 301, |
| "end": 308, |
| "text": "Table 4", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Single Models for Segmentation", |
| "sec_num": "6.3.1" |
| }, |
| { |
| "text": "We explored the use of an LSA-PLSA model when averaging multiple PLSA models to reduce the effect of poor model initialization. In particular, the adjacent-block similarities from multiple models were averaged and used in the dip computations. For simplicity, we fixed the class size of the individual models to be the same for a particular combined model and then computed performance over a range of class sizes. We compared a baseline of four randomly initialized PLSA models against two averaged models that contain an LSA-PLSA model: 1) one LSA-PLSA with two PLSA models and 2) one LSA-PLSA with three PLSA models. The best results were achieved using a combination of PLSA and LSA-PLSA models (see Table 5). All multiple-model combinations performed better than a single model (compare Tables 4 and 5), as expected.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 701, |
| "end": 708, |
| "text": "Table 5", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Multiple Models for Segmentation", |
| "sec_num": "6.3.2" |
| }, |
| { |
| "text": "In terms of computational costs, it is less costly to compute one LSA-PLSA model and two PLSA models than to compute four PLSA models. In addition, the LSA-initialized models tend to perform best with a smaller number of latent variables than the number of latent variables needed for the four PLSA model, also reducing the computational cost.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multiple Models for Segmentation", |
| "sec_num": "6.3.2" |
| }, |
| { |
| "text": "We have presented LSA-PLSA, an approach for improving the performance of PLSA by leveraging the best features of PLSA and LSA. Our approach uses LSA to initialize a PLSA model, allowing arbitrary weighting schemes to be incorporated into a PLSA model while leveraging the optimization used to improve the estimate of the PLSA parameters. We have evaluated the proposed framework on two tasks: personal-size information retrieval and text segmentation. The LSA-PLSA model outperformed PLSA on all tasks. In all cases, combining PLSA-based models outperformed a single model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "The best performance was obtained with combined models when one of the models was the LSA-PLSA model. When combining multiple PLSA models, the use of LSA-PLSA in combination with either two PLSA models or one PLSA and one LSA model improved performance while reducing the running time over the combination of four or more PLSA models as used by others.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Future areas of investigation include quantifying the expected performance of the LSA-initialized PLSA model by comparing its performance to that of the empirically best performing model, and examining whether tempered EM could further improve performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "based on the statistics of word occurrences in each cluster. We iterated over the", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "is uniform. This gives us a PLSA-smoothed term representation of each document. We then computed the Hellinger similarity (Basu et al., 1997). In all of the evaluations, the results for the PLSA model were averaged over four different runs to account for the dependence on the initial conditions.", |
| "cite_spans": [ |
| { |
| "start": 122, |
| "end": 141, |
| "text": "(Basu et al., 1997)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "annex", |
| "sec_num": null |
| }, |
| { |
| "text": "In addition to LSA-based initialization of the PLSA model, we also investigated initializing the PLSA model by first running the \"k-means\" algorithm to cluster the documents into K classes, where K is the number of latent classes, and then initializing P(w|z)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Single Models", |
| "sec_num": "6.2.1" |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Minimum distance estimation: The approach using density-based distances", |
| "authors": [ |
| { |
| "first": "Ayanendranath", |
| "middle": [], |
| "last": "Basu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ian", |
| "middle": [ |
| "R" |
| ], |
| "last": "Harris", |
| "suffix": "" |
| }, |
| { |
| "first": "Srabashi", |
| "middle": [], |
| "last": "Basu", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Handbook of Statistics", |
| "volume": "15", |
| "issue": "", |
| "pages": "21--48", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ayanendranath Basu, Ian R. Harris, and Srabashi Basu. 1997. Minimum distance estimation: The approach using density-based distances. In G. S. Maddala and C. R. Rao, editors, Handbook of Statistics, volume 15, pages 21-48. North-Holland.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Statistical models for text segmentation. Machine Learning", |
| "authors": [ |
| { |
| "first": "Doug", |
| "middle": [], |
| "last": "Beeferman", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Berger", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Lafferty", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "177--210", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Doug Beeferman, Adam Berger, and John Lafferty. 1997. Statistical models for text segmentation. Machine Learning, (34):177-210.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Topic-based document segmentation with probabilistic latent semantic analysis", |
| "authors": [ |
| { |
| "first": "Thorsten", |
| "middle": [], |
| "last": "Brants", |
| "suffix": "" |
| }, |
| { |
| "first": "Francine", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Ioannis", |
| "middle": [], |
| "last": "Tsochantaridis", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of Conference on Information and Knowledge Management", |
| "volume": "", |
| "issue": "", |
| "pages": "211--218", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thorsten Brants, Francine Chen, and Ioannis Tsochantaridis. 2002. Topic-based document segmentation with probabilistic latent semantic analysis. In Proceedings of Conference on Information and Knowledge Management, pages 211-218.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Towards better integration of semantic predictors in statistical language modeling", |
| "authors": [ |
| { |
| "first": "Noah", |
| "middle": [], |
| "last": "Coccaro", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of ICSLP-98", |
| "volume": "6", |
| "issue": "", |
| "pages": "2403--2406", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noah Coccaro and Daniel Jurafsky. 1998. Towards better integration of semantic predictors in statistical language modeling. In Proceedings of ICSLP-98, volume 6, pages 2403-2406.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Indexing by latent semantic analysis", |
| "authors": [ |
| { |
| "first": "Scott", |
| "middle": [ |
| "C" |
| ], |
| "last": "Deerwester", |
| "suffix": "" |
| }, |
| { |
| "first": "Susan", |
| "middle": [ |
| "T" |
| ], |
| "last": "Dumais", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [ |
| "K" |
| ], |
| "last": "Landauer", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [ |
| "W" |
| ], |
| "last": "Furnas", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [ |
| "A" |
| ], |
| "last": "Harshman", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Journal of the American Society of Information Science", |
| "volume": "41", |
| "issue": "6", |
| "pages": "391--407", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Scott C. Deerwester, Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society of Information Science, 41(6):391-407.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A similarity-based probability model for latent semantic indexing", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [ |
| "H", |
| "Q" |
| ], |
| "last": "Ding", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of SIGIR-99", |
| "volume": "", |
| "issue": "", |
| "pages": "58--65", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris H. Q. Ding. 1999. A similarity-based probability model for latent semantic indexing. In Proceedings of SIGIR-99, pages 58-65.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Initialization of iterative refinement clustering algorithms", |
| "authors": [ |
| { |
| "first": "Usama", |
| "middle": [ |
| "M" |
| ], |
| "last": "Fayyad", |
| "suffix": "" |
| }, |
| { |
| "first": "Cory", |
| "middle": [], |
| "last": "Reina", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [ |
| "S" |
| ], |
| "last": "Bradley", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Knowledge Discovery and Data Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "194--198", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Usama M. Fayyad, Cory Reina, and Paul S. Bradley. 1998. Initialization of iterative refinement clustering algorithms. In Knowledge Discovery and Data Mining, pages 194-198.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Knowledge-enhanced latent semantic indexing", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Berry", |
| "suffix": "" |
| }, |
| { |
| "first": "Bryan", |
| "middle": [], |
| "last": "Thompson", |
| "suffix": "" |
| }, |
| { |
| "first": "Sidney", |
| "middle": [], |
| "last": "Balin", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Information Retrieval", |
| "volume": "6", |
| "issue": "2", |
| "pages": "225--250", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Guo, Michael Berry, Bryan Thompson, and Sidney Balin. 2003. Knowledge-enhanced latent semantic indexing. Information Retrieval, 6(2):225-250.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Texttiling: Segmenting text into multi-paragraph subtopic passages", |
| "authors": [ |
| { |
| "first": "Marti", |
| "middle": [ |
| "A" |
| ], |
| "last": "Hearst", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Computational Linguistics", |
| "volume": "23", |
| "issue": "1", |
| "pages": "33--64", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marti A. Hearst. 1997. Texttiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33-64.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Probabilistic latent semantic indexing", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Hofmann", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of SIGIR-99", |
| "volume": "", |
| "issue": "", |
| "pages": "35--44", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of SIGIR-99, pages 35-44.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Topic analysis using a finite mixture model", |
| "authors": [ |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Yamanishi", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", |
| "volume": "", |
| "issue": "", |
| "pages": "35--44", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hang Li and Kenji Yamanishi. 2000. Topic analysis using a finite mixture model. In Proceedings of Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 35-44.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Probabilistic principal component analysis", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Tipping", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Bishop", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Journal of the Royal Statistical Society, Series B", |
| "volume": "61", |
| "issue": "3", |
| "pages": "611--622", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Tipping and Christopher Bishop. 1999. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61(3):611-622.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Evaluating the utility of statistical phrases and latent semantic indexing for text classification", |
| "authors": [ |
| { |
| "first": "Huiwen", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Dimitrios", |
| "middle": [], |
| "last": "Gunopulos", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of IEEE International Conference on Data Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "713--716", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Huiwen Wu and Dimitrios Gunopulos. 2002. Evaluating the utility of statistical phrases and latent semantic indexing for text classification. In Proceedings of IEEE International Conference on Data Mining, pages 713-716.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF3": { |
| "num": null, |
| "type_str": "figure", |
| "text": "In the analysis above, we assume that the latent classes in the LSA model correspond to the latent classes of the PLSA model. Making the simplifying assumption that the latent classes of the LSA model are conditionally independent on term", |
| "uris": null |
| }, |
| "FIGREF5": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Average Precision on CACM Data set", |
| "uris": null |
| }, |
| "FIGREF6": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Average Precision on CISI using Multiple Models", |
| "uris": null |
| }, |
| "FIGREF7": { |
| "num": null, |
| "type_str": "figure", |
| "text": "), have developed text segmentation systems. Brants et al. (2002) developed a system for text segmentation based on a PLSA model of similarity. The text is divided into overlapping blocks of sentences, and the PLSA representations of the terms in adjacent blocks are compared using a similarity measure. The positions of the largest local minima, or dips, in the sequence of block-pair similarity values are emitted as segmentation points.", |
| "uris": null |
| }, |
| "TABREF1": { |
| "text": "Hofmann (1999) uses the EM algorithm to compute optimal parameters. The E-step is given by", |
| "html": null, |
| "content": "<table><tr><td>E-step (9): P(z|d,w) = P(z) P(d|z) P(w|z) / sum_{z'} P(z') P(d|z') P(w|z')</td></tr><tr><td>M-step (10): P(w|z) proportional to sum_d n(d,w) P(z|d,w)</td></tr><tr><td>M-step (11): P(d|z) proportional to sum_w n(d,w) P(z|d,w)</td></tr></table>", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF2": { |
| "text": "Correlation between the negative loglikelihood and Average or BreakEven Precision", |
| "html": null, |
| "content": "<table><tr><td>Data</td><td># Factors</td><td colspan=\"2\">Average BreakEven</td></tr><tr><td/><td/><td>Precision</td><td>Precision</td></tr><tr><td>Med</td><td>64</td><td>-0.47</td><td>-0.41</td></tr><tr><td>Med</td><td>256</td><td>-0.15</td><td>0.25</td></tr><tr><td>CISI</td><td>64</td><td>-0.20</td><td>-0.20</td></tr><tr><td>CISI</td><td>256</td><td>-0.12</td><td>-0.16</td></tr><tr><td>CRAN</td><td>64</td><td>0.03</td><td>0.16</td></tr><tr><td>CRAN</td><td>256</td><td>-0.15</td><td>0.14</td></tr><tr><td>CACM</td><td>64</td><td>-0.64</td><td>0.08</td></tr><tr><td>CACM</td><td>256</td><td>-0.22</td><td>-0.12</td></tr></table>", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF3": { |
| "text": "Comparing with Equation 2, the LSA factors,", |
| "html": null, |
| "content": "<table><tr><td>U and V correspond to the factors P(d|z) and P(w|z) of the PLSA model, and the mixing proportions of the latent classes in PLSA, P(z), correspond to the singular values of the SVD in LSA.</td></tr><tr><td>Note that we cannot directly identify U with P(d|z) and V with P(w|z), since the entries of U and V are not constrained to be nonnegative probabilities.</td></tr></table>", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF6": { |
| "text": "Retrieval Evaluation with Single Models. Best performing model for each dataset/metric is in bold.", |
| "html": null, |
| "content": "<table><tr><td>Data</td><td colspan=\"4\">Met. LSA PLSA LSA-kmeans-</td></tr><tr><td/><td/><td/><td>PLSA</td><td>PLSA</td></tr><tr><td>Med</td><td>Avg. 0.55</td><td>0.38</td><td>0.52</td><td>0.37</td></tr><tr><td>Med</td><td>Brk. 0.53</td><td>0.39</td><td>0.54</td><td>0.39</td></tr><tr><td>CISI</td><td>Avg. 0.09</td><td>0.12</td><td>0.14</td><td>0.12</td></tr><tr><td>CISI</td><td>Brk. 0.11</td><td>0.15</td><td>0.17</td><td>0.15</td></tr><tr><td colspan=\"2\">CACM Avg. 0.13</td><td>0.21</td><td>0.25</td><td>0.19</td></tr><tr><td colspan=\"2\">CACM Brk. 0.15</td><td>0.24</td><td>0.28</td><td>0.22</td></tr><tr><td colspan=\"2\">CRAN Avg. 0.28</td><td>0.30</td><td>0.32</td><td>0.23</td></tr><tr><td colspan=\"2\">CRAN Brk. 0.28</td><td>0.29</td><td>0.31</td><td>0.23</td></tr></table>", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF7": { |
| "text": "Retrieval Evaluation with Multiple Models. Best performing model for each dataset and metric is in bold. L-PLSA corresponds to LSA-PLSA.", |
| "html": null, |
| "content": "<table><tr><td>Data</td><td colspan=\"2\">Met 4PLSA</td><td>LSA</td><td>LSA</td><td>L-PLSA</td></tr><tr><td>Set</td><td/><td/><td>PLSA</td><td>PLSA</td><td>3PLSA</td></tr><tr><td/><td/><td/><td>L-PLSA</td><td/><td/></tr><tr><td>Med</td><td>Avg</td><td>0.55</td><td>0.620</td><td>0.567</td><td>0.584</td></tr><tr><td>Med</td><td>Brk</td><td>0.53</td><td>0.575</td><td>0.545</td><td>0.561</td></tr><tr><td>CISI</td><td>Avg</td><td>0.152</td><td>0.163</td><td>0.152</td><td>0.155</td></tr><tr><td>CISI</td><td>Brk</td><td>0.18</td><td>0.197</td><td>0.187</td><td>0.182</td></tr><tr><td colspan=\"2\">CACM Avg</td><td>0.278</td><td>0.279</td><td>0.249</td><td>0.276</td></tr><tr><td colspan=\"2\">CACM Brk</td><td>0.299</td><td>0.296</td><td>0.275</td><td>0.31</td></tr><tr><td colspan=\"2\">CRAN Avg</td><td>0.377</td><td>0.39</td><td>0.365</td><td>0.39</td></tr><tr><td colspan=\"2\">CRAN Brk</td><td>0.358</td><td>0.368</td><td>0.34</td><td>0.37</td></tr></table>", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF8": { |
| "text": "Single Model Segmentation Word and Sentence Error Rates (%). PLSA error rate at the optimal number of classes in terms of S is in italic. Best performing model is in bold without italic.", |
| "html": null, |
| "content": "<table><tr><td>Num Classes</td><td colspan=\"2\">LSA-PLSA</td><td colspan=\"2\">PLSA</td></tr><tr><td></td><td>word</td><td>sent</td><td>word</td><td>sent</td></tr><tr><td>64</td><td>2.14</td><td>2.54</td><td>3.19</td><td>3.51</td></tr><tr><td>100</td><td>2.31</td><td>2.65</td><td>2.94</td><td>3.35</td></tr><tr><td>128</td><td>2.05</td><td>2.57</td><td>2.73</td><td>3.13</td></tr><tr><td>140</td><td>2.40</td><td>2.69</td><td>2.72</td><td>3.18</td></tr><tr><td>150</td><td>2.35</td><td>2.73</td><td>2.91</td><td>3.27</td></tr><tr><td>256</td><td>2.99</td><td>3.56</td><td>2.87</td><td>3.24</td></tr><tr><td>1024</td><td>3.72</td><td>4.11</td><td>3.19</td><td>3.51</td></tr><tr><td>2048</td><td>2.72</td><td>2.99</td><td>3.23</td><td>3.64</td></tr></table>", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF9": { |
| "text": "Multiple Model Segmentation Word and Sentence Error Rates (%). Performance at the optimal number of classes in terms of S is in italic. Best performing model is in bold without italic.", |
| "html": null, |
| "content": "<table><tr><td>Num Classes</td><td colspan=\"2\">4PLSA</td><td colspan=\"2\">LSA-PLSA 2PLSA</td><td colspan=\"2\">LSA-PLSA 3PLSA</td></tr><tr><td></td><td>word</td><td>sent</td><td>word</td><td>sent</td><td>word</td><td>sent</td></tr><tr><td>64</td><td>2.67</td><td>2.93</td><td>2.01</td><td>2.24</td><td>1.59</td><td>1.78</td></tr><tr><td>100</td><td>2.35</td><td>2.65</td><td>1.59</td><td>1.83</td><td>1.37</td><td>1.62</td></tr><tr><td>128</td><td>2.43</td><td>2.85</td><td>1.99</td><td>2.37</td><td>1.57</td><td>1.88</td></tr><tr><td>140</td><td>2.04</td><td>2.39</td><td>1.66</td><td>1.90</td><td>1.77</td><td>2.07</td></tr><tr><td>150</td><td>2.41</td><td>2.73</td><td>1.96</td><td>2.21</td><td>1.86</td><td>2.12</td></tr><tr><td>256</td><td>2.32</td><td>2.62</td><td>1.78</td><td>1.98</td><td>1.82</td><td>1.98</td></tr><tr><td>1024</td><td>1.85</td><td>2.25</td><td>2.51</td><td>2.95</td><td>2.36</td><td>2.77</td></tr><tr><td>2048</td><td>2.88</td><td>3.27</td><td>2.73</td><td>3.06</td><td>2.61</td><td>2.86</td></tr></table>", |
| "type_str": "table", |
| "num": null |
| } |
| } |
| } |
| } |