{
"paper_id": "C08-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:24:35.264113Z"
},
"title": "An Improved Hierarchical Bayesian Model of Language for Document Classification",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Allison",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {}
},
"email": "ben@dcs.shef.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper addresses the fundamental problem of document classification, and we focus attention on classification problems where the classes are mutually exclusive. In the course of the paper we advocate an approximate sampling distribution for word counts in documents, and demonstrate the model's capacity to outperform both the simple multinomial and more recently proposed extensions on the classification task. We also compare the classifiers to a linear SVM, and show that provided certain conditions are met, the new model allows performance which exceeds that of the SVM and attains amongst the very best published results on the Newsgroups classification task.",
"pdf_parse": {
"paper_id": "C08-1004",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper addresses the fundamental problem of document classification, and we focus attention on classification problems where the classes are mutually exclusive. In the course of the paper we advocate an approximate sampling distribution for word counts in documents, and demonstrate the model's capacity to outperform both the simple multinomial and more recently proposed extensions on the classification task. We also compare the classifiers to a linear SVM, and show that provided certain conditions are met, the new model allows performance which exceeds that of the SVM and attains amongst the very best published results on the Newsgroups classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Document classification is one of the key technologies in the emerging digital world: as the amount of textual information existing in electronic form increases exponentially, reliable automatic methods to sift through the haystack and pluck out the occasional needle are almost a necessity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous comparative studies of different classifiers (for example, (Yang and Liu, 1999; Joachims, 1998; Rennie et al., 2003; Dumais et al., 1998) ) have consistently shown linear Support Vector Machines to be the most appropriate method. Generative probabilistic classifiers, often represented by the multinomial classifier, have in these same studies performed poorly, and this empirical evidence has been bolstered by theoretical arguments (Lasserre et al., 2006) .",
"cite_spans": [
{
"start": 68,
"end": 88,
"text": "(Yang and Liu, 1999;",
"ref_id": "BIBREF20"
},
{
"start": 89,
"end": 104,
"text": "Joachims, 1998;",
"ref_id": "BIBREF8"
},
{
"start": 105,
"end": 125,
"text": "Rennie et al., 2003;",
"ref_id": "BIBREF19"
},
{
"start": 126,
"end": 146,
"text": "Dumais et al., 1998)",
"ref_id": "BIBREF3"
},
{
"start": 443,
"end": 466,
"text": "(Lasserre et al., 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we revisit the theme of generative classifiers for mutually exclusive classification problems, but consider classifiers employing more complex models of language; as a starting point we consider recent work (Madsen et al., 2005) which relaxes some of the multinomial assumptions. We continue and expand upon the theme of that work, but identify some weaknesses both in its theoretical motivations and practical applications. We demonstrate a new approximate model which overcomes some of these concerns, and demonstrate substantial improvements that such a model achieves on four classification tasks, three of which are standard and one of which is a newly created task. We also show the new model to be highly competitive to an SVM where the previous models are not.",
"cite_spans": [
{
"start": 221,
"end": 242,
"text": "(Madsen et al., 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u00a72 of the paper describes previous work which has sought a probabilistic model of language and its application to document classification. \u00a73 describes the models we consider in this paper, and gives details of parameter estimation. \u00a74 describes our evaluation of the models, and \u00a75 presents the results of this evaluation. \u00a76 explores reasons for the observed results, and finally \u00a77 ends with some concluding remarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The problem of finding an appropriate and tractable model for language is one which has been studied in many different areas. In many cases, the first (and often only) model is one in which counts of words are modelled as binomial-or Poissondistributed random variables. However, the use of such distributions entails an implicit assumption that the occurrence of words is the result of a fixed number of independent trials-draws from a \"bag of words\"-where on each trial the probability of success is constant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Several authors, among them (Church and Gale, 1995; Katz, 1996) , observe empirically such models are not always accurate predictors of actual word behaviour. This moves them to suggest distributions for word counts where the underlying probability varies between documents; thus the expected behaviour of a word in a new document is a combination of predictions for all possible probabilities. Other authors (Jansche, 2003; Eyheramendy et al., 2003; Lowe, 1999) use these same ideas to classify documents on the basis of subsets of vocabulary, in the first and third cases with encouraging results using small subsets (in the second case, the performance of the model is shown to be poor compared to the multinomial).",
"cite_spans": [
{
"start": 28,
"end": 51,
"text": "(Church and Gale, 1995;",
"ref_id": null
},
{
"start": 52,
"end": 63,
"text": "Katz, 1996)",
"ref_id": "BIBREF10"
},
{
"start": 409,
"end": 424,
"text": "(Jansche, 2003;",
"ref_id": "BIBREF7"
},
{
"start": 425,
"end": 450,
"text": "Eyheramendy et al., 2003;",
"ref_id": "BIBREF5"
},
{
"start": 451,
"end": 462,
"text": "Lowe, 1999)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "When one moves to consider counts of all words in some vocabulary, the proper distribution of the whole vector of word counts is multinomial. (Madsen et al., 2005) apply the same idea as for the single word (binomial) case to the multinomial, using the most convenient form of distribution to represent the way the vector of multinomial probabilities varies between documents, and report encouraging results compared to the simple multinomial. However, we show that the use of the most mathematically convenient distribution to describe the way the vector of probabilities varies entails some unwarranted and undesirable assumptions. This paper will first describe those assumptions, and then describe an approximate technique for overcoming the assumptions. We show that, combined with some alterations to estimation, the models lead to a classifier able to outperform both the multinomial classifier and a linear SVM.",
"cite_spans": [
{
"start": 142,
"end": 163,
"text": "(Madsen et al., 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we briefly describe the use of a generative model of language as applied to the problem of document classification, and also how we estimate all relevant parameters for the work which follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models of Language for Document Classification",
"sec_num": "3"
},
{
"text": "In terms of notation, we usec to represent a random variable and c to represent an outcome. We use roman letters for observed or observable quantities and greek letters for unobservables (i.e. parameters). We writec \u223c \u03d5(c) to mean thatc has probability density (discrete or continuous) \u03d5(c), and write p(c) as shorthand for p(c = c). Finally, we make no explicit distinction in notation between univariate and multivariate quantities; however, we use \u03b8 j to refer to the j-th component of the vector \u03b8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models of Language for Document Classification",
"sec_num": "3"
},
{
"text": "We consider documents to be represented as vectors of count-valued random variables such",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models of Language for Document Classification",
"sec_num": "3"
},
{
"text": "that d = {d 1 ...d v }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models of Language for Document Classification",
"sec_num": "3"
},
{
"text": "For classification, interest centres on the conditional distribution of the class variable, given such a document. Where documents are to be assigned to one class only (as in the case of this paper), this class is judged to be the most probable class. For generative classifiers such as those considered here, the posterior distribution of interest is modelled from the joint distribution of class and document; thus ifc is a variable representing class andd is a vector of word counts, then:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models of Language for Document Classification",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(c|d) \u221d p(c) \u2022 p(d|c)",
"eq_num": "(1)"
}
],
"section": "Probabilistic Models of Language for Document Classification",
"sec_num": "3"
},
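A minimal sketch of this decision rule in Python (function names and the lookup-table likelihood are invented for illustration): with the uniform class prior assumed in the paper, eq. (1) reduces to maximising the document log-likelihood, computed in log space to avoid underflow on long documents.

```python
import math

def classify(doc, classes, log_likelihood):
    """Decision rule of eq. (1): argmax_c p(c) * p(d|c).

    With a uniform prior on the class variable (as assumed in the
    paper), the prior term is constant across classes, so the decision
    reduces to maximising log p(d|c)."""
    return max(classes, key=lambda c: log_likelihood(doc, c))

# Tiny illustration with a hand-specified log-likelihood table.
scores = {("hello world", "spam"): -12.0, ("hello world", "ham"): -7.5}
predicted = classify("hello world", ["spam", "ham"],
                     lambda d, c: scores[(d, c)])
```

Any of the sampling models below plugs in as the `log_likelihood` argument; only that term changes between classifiers.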
{
"text": "For the purposes of this work we also assume a uniform prior onc, meaning the ultimate decision is on the basis of the document alone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models of Language for Document Classification",
"sec_num": "3"
},
{
"text": "A natural way to model the distribution of counts is to let p(d|c) be distributed multinomially, as proposed in (Guthrie et al., 1994; McCallum and Nigam, 1998) amongst others. The multinomial model assumes that documents are the result of repeated trials, where on each trial a word is selected at random, and the probability of selecting the j-th word from class c is \u03b8 cj . However, in general we will not use the subscript c -we estimate one set of parameters for each possible class.",
"cite_spans": [
{
"start": 112,
"end": 134,
"text": "(Guthrie et al., 1994;",
"ref_id": "BIBREF6"
},
{
"start": 135,
"end": 160,
"text": "McCallum and Nigam, 1998)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multinomial Sampling Model",
"sec_num": null
},
{
"text": "Using multinomial sampling, the term p(d|c) has distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multinomial Sampling Model",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p multinomial (d|\u03b8) = j d j ! j (d j !) j \u03b8 d j j",
"eq_num": "(2)"
}
],
"section": "Multinomial Sampling Model",
"sec_num": null
},
{
"text": "A simple Bayes estimator for \u03b8 can be obtained by taking the prior for \u03b8 as a Dirichlet distribution, in which case the posterior is also Dirichlet. Denote the total training data for the class in question as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multinomial Sampling Model",
"sec_num": null
},
{
"text": "D = {(d 11 ...d 1v ) ... (d k1 ...d kv )} (that is, counts of each of v words in k documents). Then if p(\u03b8) \u223c Dirichlet(\u03b1 1 ...\u03b1 v )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multinomial Sampling Model",
"sec_num": null
},
{
"text": ", the mean of p(\u03b8|D) for the j-th component of \u03b8 (which is the estimate we use) is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multinomial Sampling Model",
"sec_num": null
},
{
"text": "\u03b8 j = E[\u03b8 j |D] = \u03b1 j + n j j \u03b1 j + n \u2022 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multinomial Sampling Model",
"sec_num": null
},
{
"text": "where the n j are the sufficient statistics i n ij , and n \u2022 is j n j . We follow common practice and use the standard reference Dirichlet prior, which is uniform on \u03b8, such that \u03b1 j = 1 for all j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multinomial Sampling Model",
"sec_num": null
},
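As a concrete illustration, the multinomial classifier with the posterior-mean estimate of eq. (3) under the uniform prior (\u03b1_j = 1) takes only a few lines; the function names and the {word: count} document representation are illustrative choices, not from the paper.

```python
import math
from collections import Counter

def estimate_theta(docs, vocab, alpha=1.0):
    # Posterior-mean estimate of eq. (3) under a uniform Dirichlet prior:
    # theta_j = (alpha_j + n_j) / (sum_j alpha_j + n_total)
    n = Counter()
    for d in docs:          # each doc is a {word: count} mapping
        n.update(d)
    denom = alpha * len(vocab) + sum(n.values())
    return {w: (alpha + n[w]) / denom for w in vocab}

def log_multinomial(doc, theta):
    # Log of eq. (2), dropping the multinomial coefficient, which is
    # constant across classes and so irrelevant for classification.
    return sum(cnt * math.log(theta[w]) for w, cnt in doc.items())
```

One `theta` is estimated per class from that class's training documents, and a test document is assigned to the class with the highest `log_multinomial` score.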
{
"text": "In contrast to the model above, a hierarchical sampling model assumes that\u03b8 varies between documents, and has distribution which depends upon parameters \u03b7. This allows for a more realistic model, letting the probabilities of using words vary between documents subject only to some general trend.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Sampling Models",
"sec_num": "3.1"
},
{
"text": "For example, consider documents about politics: some will discuss the current British Prime Minister, Gordon Brown. In these documents, the probability of using the word brown (assuming case normalisation) may be relatively high. Other politics articles may discuss US politics, for example, or the UN, French elections, and so on, and these articles may have a much lower probability of using the word brown: perhaps just the occasional reference to the Prime Minister. A hierarchical model attempts to model the way this probability varies between documents in the politics class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Sampling Models",
"sec_num": "3.1"
},
{
"text": "Starting with the joint distribution p(\u03b8, d|\u03b7) and averaging over all possible values that \u03b8 may take in the new document gives:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Sampling Models",
"sec_num": "3.1"
},
{
"text": "p(d|\u03b7) =\u02c6p(\u03b8|\u03b7)p(d|\u03b8) d\u03b8 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Sampling Models",
"sec_num": "3.1"
},
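Equation (4) can be read as an expectation of p(d|\u03b8) under p(\u03b8|\u03b7), which a plain Monte Carlo average makes concrete. This is purely illustrative: the paper relies on closed-form integrals (the DCM and beta-binomial below) rather than sampling, and all names here are invented for the sketch.

```python
import math
import random

def mc_marginal_likelihood(doc, sample_theta, log_p_doc_given_theta,
                           n_samples=1000, seed=0):
    # Monte Carlo reading of eq. (4):
    # p(d|eta) = integral of p(theta|eta) p(d|theta) dtheta
    #          ~ (1/S) * sum_s p(d|theta_s),  with theta_s ~ p(theta|eta)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        theta = sample_theta(rng)           # draw theta from p(theta|eta)
        total += math.exp(log_p_doc_given_theta(doc, theta))
    return total / n_samples
```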
{
"text": "where integration is understood to be over the entire range of possible \u03b8. Intuitively, this allows \u03b8 to vary between documents subject to the restriction that\u03b8 \u223c p(\u03b8|\u03b7), and the probability of observing a document is the average of its probability for all possible \u03b8, weighted by p(\u03b8|\u03b7). The sampling process is 1) \u03b8 is first sampled from p(\u03b8|\u03b7) and then 2) d is sampled from p(d|\u03b8), leading to the hierarchical name for such models. (Madsen et al., 2005 ) suggest a form of (4) where p(\u03b8|\u03b7) is Dirichlet-distributed, leading to a Dirichlet-Compound-Multinomial sampling distribution. The main benefit of this assumption is that the integral of (4) can be obtained in closed form. Thus p(d|\u03b1) (using the standard \u03b1 notation for Dirichlet parameters) has distribution:",
"cite_spans": [
{
"start": 435,
"end": 455,
"text": "(Madsen et al., 2005",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Sampling Models",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p DCM (d|\u03b1) = j d j ! j (d j !) \u00d7 \u0393 j \u03b1 j \u0393 j d j + \u03b1 j \u00d7 j \u0393(\u03b1 j + d j ) \u0393(\u03b1 j )",
"eq_num": "(5)"
}
],
"section": "Dirichlet Compound Multinomial Sampling Model",
"sec_num": null
},
{
"text": "Maximum likelihood estimates for the \u03b1 are difficult to obtain, since the likelihood for \u03b1 is a function which must be maximised for all components simultaneously, leading some authors to use approximate distributions to improve the tractability of maximum likelihood estimation (Elkan, 2006) . In contrast, we reparameterise the Dirichlet compound multinomial, and estimate some of the parameters in closed form.",
"cite_spans": [
{
"start": 279,
"end": 292,
"text": "(Elkan, 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Compound Multinomial Sampling Model",
"sec_num": null
},
{
"text": "We reparameterise the model in terms of \u00b5 and \u03bb -\u00b5 is a vector of length v, and \u03bb is a constant which reflects the variance of \u03b8. Under this parametrisation, \u03b1 j = \u03bb\u00b5 j . The estimate we use for \u00b5 j is simply:\u03bc",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Compound Multinomial Sampling Model",
"sec_num": null
},
{
"text": "j = n j n \u2022 (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Compound Multinomial Sampling Model",
"sec_num": null
},
{
"text": "where n j and n \u2022 are defined above. This simply matches the first moment about the mean of the distribution with the first moment about the mean of the sample. Once again letting:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Compound Multinomial Sampling Model",
"sec_num": null
},
{
"text": "D = {d 1 ...d k } = {(d 11 ...d 1v )...(d k1 ...d kv )}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Compound Multinomial Sampling Model",
"sec_num": null
},
{
"text": "denote the training data such that the d i are individual document vectors and d ij are counts of the j-th word in the i-th document, the likelihood for \u03bb is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Compound Multinomial Sampling Model",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(\u03bb) = i \u0393( j \u03bb\u00b5 j ) \u0393( j d ij + \u03bb\u00b5 j ) j \u0393(\u03bb\u00b5 j + d ij ) \u0393(\u03bb\u00b5 j )",
"eq_num": "(7)"
}
],
"section": "Dirichlet Compound Multinomial Sampling Model",
"sec_num": null
},
{
"text": "This is a one-dimensional function, and as such is much more simple to maximise using standard optimisation techniques, for example as in (Minka, 2000) .",
"cite_spans": [
{
"start": 138,
"end": 151,
"text": "(Minka, 2000)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Compound Multinomial Sampling Model",
"sec_num": null
},
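Using the log-gamma function, the one-dimensional likelihood of eq. (7) is cheap to evaluate, after which any standard scalar optimiser can maximise it over \u03bb > 0. A minimal sketch, with invented names and documents given as count vectors aligned with \u00b5:

```python
import math

def dcm_log_likelihood(lam, mu, docs):
    # Log of eq. (7): the DCM likelihood as a function of the single
    # scalar lambda, with the vector mu held fixed (alpha_j = lambda*mu_j).
    s_alpha = lam * sum(mu)
    ll = 0.0
    for d in docs:  # d is a count vector aligned with mu
        ll += math.lgamma(s_alpha) - math.lgamma(s_alpha + sum(d))
        for dj, mj in zip(d, mu):
            a = lam * mj
            ll += math.lgamma(a + dj) - math.lgamma(a)
    return ll
```

Because the function depends on the single scalar \u03bb, golden-section search or a few Newton steps suffice; this is the tractability gain over maximising all components of \u03b1 jointly that the text describes.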
{
"text": "As before, however, simple maximum likelihood estimates alone are not sufficient: if a word fails to appear at all in D, the corresponding \u00b5 j will be zero, in which case the distribution is improper. The theoretically sound solution would be to incorporate a prior on either \u03b1 or (under our parameterisation) \u00b5; however, this would lead to high computational cost as the resulting posterior would be complicated to work with. (Madsen et al., 2005) instead set each\u03b1 j as the maximum likelihood estimate plus some , in some ways echoing the estimation of \u03b8 for the multinomial model. Unfortunately, unlike a prior this strategy has the same effect regardless of the amount of training data available, whereas any true prior would have diminishing effect as the amount of training data increased. Instead, we supplement actual training data with a pseudo-document in which every word occurs once (note this is quite different to setting = 1); this echoes the effect of a true prior on \u00b5, but without the computational burden.",
"cite_spans": [
{
"start": 427,
"end": 448,
"text": "(Madsen et al., 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Compound Multinomial Sampling Model",
"sec_num": null
},
{
"text": "Despite its apparent convenience and theoretical well-foundedness, the Dirichlet compound multinomial model has one serious drawback, which is emphasised by the reparameterisation. Under the Dirichlet, there is a functional dependence between the expected value of \u03b8 j , \u00b5 j and its variance, where the relationship is regulated by the constant \u03bb. Thus two words whose \u00b5 j are the same will also have the same variance in the \u03b8 j . This is of concern since different words have different patterns of use -to use a popular turn of phrase, some words are more \"bursty\" than others (see (Church and Gale, 1995) for examples). In practice, we may hope to model different words as having the same expected value, but drastically different variancesunfortunately, this is not possible using the Dirichlet model.",
"cite_spans": [
{
"start": 584,
"end": 607,
"text": "(Church and Gale, 1995)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
{
"text": "The difficulty with switching to a different model is the evaluation of the integral in (4). The integral is in fact in many thousands of dimensions, and even if it were possible to evaluate such an integral numerically, the process would be exceptionally slow.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
{
"text": "We overcome this problem by decomposing the term p(d|\u03b7) into a product of independent terms of the form p(d j |\u03b7 j ). A natural way for each of these terms to be distributed is to let the probability p(d j |\u03b8 j ) be binomial and to let p(\u03b8 j |\u03b7 j ) be betadistributed. The probability p(d j |\u03b7 j ) (where \u03b7 j = {\u03b1 j , \u03b2 j }, the parameters of the beta distribution) is then:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
{
"text": "p bb (d j |\u03b1 j , \u03b2 j ) = n d j B(d j + \u03b1 j , n \u2212 d j + \u03b2 j ) B(\u03b1 j , \u03b2 j ) (8) where B(\u2022)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
{
"text": "is the Beta function. The term p(d|\u03b7) is then simply:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p beta\u2212binomial (d|\u03b7) = j p(d j |\u03b7 j )",
"eq_num": "(9)"
}
],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
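Equations (8) and (9) can be evaluated stably through log-gamma. A minimal sketch (the names and the {word: (alpha, beta)} parameter layout are invented for illustration; n is the document length in words):

```python
import math

def log_beta_fn(a, b):
    # log B(a, b) = log Gamma(a) + log Gamma(b) - log Gamma(a + b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_beta_binomial(dj, n, alpha_j, beta_j):
    # Log of eq. (8): C(n, dj) * B(dj+alpha, n-dj+beta) / B(alpha, beta)
    log_choose = (math.lgamma(n + 1) - math.lgamma(dj + 1)
                  - math.lgamma(n - dj + 1))
    return (log_choose + log_beta_fn(dj + alpha_j, n - dj + beta_j)
            - log_beta_fn(alpha_j, beta_j))

def log_doc_likelihood(doc, n, params):
    # Eq. (9): independent product over the vocabulary;
    # params maps word -> (alpha_j, beta_j), doc maps word -> count.
    return sum(log_beta_binomial(doc.get(w, 0), n, a, b)
               for w, (a, b) in params.items())
```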
{
"text": "This allows means and variances for each of the \u03b8 j to be specified separately, but this comes at a price: while the Dirichlet ensures that j \u03b8 j = 1 for all possible \u03b8, the model above does not. Thus the model is only an approximation to a true model where components of \u03b8 have independent means and variances, and the requirements of the multinomial are fulfilled. However, given the inflexibility of the Dirichlet multinomial model, we argue that such a sacrifice is justified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
{
"text": "In order to estimate parameters of the Beta-Binomial model, we take a slight departure from both (Lowe, 1999) and (Jansche, 2003) who have both used a similar model previously for individual words. (Lowe, 1999) uses numerical techniques to find maximum likelihood estimates of the \u03b1 j and \u03b2 j , which was feasible in that case because of the highly restricted vocabulary and two-classes. (Jansche, 2003) argues exactly this point, and uses moment-matched estimates; our estimation is similar to that, in that we use moment-matching, but different in other regards.",
"cite_spans": [
{
"start": 97,
"end": 109,
"text": "(Lowe, 1999)",
"ref_id": "BIBREF14"
},
{
"start": 114,
"end": 129,
"text": "(Jansche, 2003)",
"ref_id": "BIBREF7"
},
{
"start": 198,
"end": 210,
"text": "(Lowe, 1999)",
"ref_id": "BIBREF14"
},
{
"start": 388,
"end": 403,
"text": "(Jansche, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
{
"text": "Conventional parameter estimates are affected (in some way or other) by the likelihood function for a parameter, and the likelihood function is such that longer documents exert a greater influence on the overall likelihood for a parameter. That is, we note that if the true binomial parameter \u03b8 ij for the j-th word in the i-th document were known, then the most sensible expected value for the distribution over \u03b8 j would be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E [\u03b8 j ] = 1 k \u00d7 k i=1 \u03b8 ij",
"eq_num": "(10)"
}
],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
{
"text": "Whereas the expected value of conventional method-of-moments estimate is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E [\u03b8 j ] = k i=1 p (\u03b8 ij ) \u00d7\u03b8 ij",
"eq_num": "(11)"
}
],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
{
"text": "That is, a weighted mean of the maximum likelihood estimates of each of the \u03b8 ij , with weights given by p (\u03b8 ij ), i.e. the length of the i-th document. Similar effects would be observed by maximising the likelihood function numerically. This is to our minds undesireable, since we do note believe that longer documents are necessarily more representative of the population of all documents than are shorter ones (indeed, extremely long documents are likeliy to be an oddity), and in any case the goal is to capture variation in the parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
{
"text": "This leads us to suggest estimates for parameters such that the expected value of the distribution is as in 10 but with the \u03b8 ij (which are unknown) replaced with their maximum likelihood estimates, \u03b8 ij . We then use these estimates to specify the desired variance, leading to the simultaneous equations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 j \u03b1 j + \u03b2 j = i\u03b8 ij k (12) \u03b1 j \u03b2 j (\u03b1 j + \u03b2 j ) 2 (\u03b1 j + \u03b2 j + 1) = i (\u03b8 ij \u2212 E[\u03b8 j ]) 2 k",
"eq_num": "(13)"
}
],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
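Equations (12) and (13) are the mean and variance of a Beta(\u03b1_j, \u03b2_j) distribution, so given the sample mean and variance of the per-document estimates they invert in closed form. A sketch (valid only when 0 < var < mean(1 - mean), which the pseudo-document smoothing described below helps ensure):

```python
def beta_from_moments(mean, var):
    # Invert eqs. (12)-(13): for a Beta(alpha, beta),
    #   mean = alpha / (alpha + beta)
    #   var  = alpha*beta / ((alpha+beta)^2 * (alpha+beta+1))
    # Solving the variance equation gives alpha+beta = mean*(1-mean)/var - 1.
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common
```

In the classifier, `mean` and `var` would be the across-document sample moments of the per-document maximum likelihood estimates for each word, computed once per word per class.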
{
"text": "As before, we supplement actual training documents with a pseudo-document in which every word occurs once to prevent any \u03b1 j being zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Joint Beta-Binomial Sampling Model",
"sec_num": null
},
{
"text": "This section describes evaluation of the models above on four text classification problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating the Models",
"sec_num": "4"
},
{
"text": "The Newsgroups task is to classify postings into one of twenty categories, and uses data originally collected in (Lang, 1995) . The task involves a relatively large number of documents (approximately 20,000) with roughly even distribution of messages, giving a very low baseline of approximately 5%.",
"cite_spans": [
{
"start": 113,
"end": 125,
"text": "(Lang, 1995)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating the Models",
"sec_num": "4"
},
{
"text": "For the second task, we use a task derived from the Enron mail corpus (Klimt and Yang, 2004) , described in (Allison and Guthrie, 2008) . Corpus is a nine-way email authorship attribution problem, with 4071 emails (between 174 and 706 emails per author) 1 . The mean length of messages in the corpus is 75 words.",
"cite_spans": [
{
"start": 70,
"end": 92,
"text": "(Klimt and Yang, 2004)",
"ref_id": "BIBREF11"
},
{
"start": 108,
"end": 135,
"text": "(Allison and Guthrie, 2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating the Models",
"sec_num": "4"
},
{
"text": "WebKB is a web-page classification task, where the goal is to determine the webpage type of the unseen document. We follow the setup of (McCallum and Nigam, 1998) and many thereafter, and use the four biggest categories, namely student, faculty, course and project. The resulting corpus consists of approximately 4,200 webpages.",
"cite_spans": [
{
"start": 136,
"end": 162,
"text": "(McCallum and Nigam, 1998)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating the Models",
"sec_num": "4"
},
{
"text": "The SpamAssassin corpus is made available for public use as part of the open-source Apache Spa-mAssassin Project 2 . It consists of email divided into three categories: Easy Ham, which is email unambiguously ham (i.e. not spam), Hard Ham which is not spam but shares many traits with spam, and finally Spam. The task is to apply these labels to unseen emails. We use the latest version of all datasets, and combine the easy ham and easy ham 2 as well as spam and spam 2 sets to form a corpus of just over 6,000 messages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating the Models",
"sec_num": "4"
},
{
"text": "In all cases, we use 10-fold cross validation to make maximal use of the data, where folds are chosen by random assignment. We define \"words\" to be contiguous whitespace-delimited alpha-numeric strings, and perform no stemming or stoplisting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating the Models",
"sec_num": "4"
},
{
"text": "For the purposes of comparison, we also present results using a linear SVM (Joachims, 1999) , which we convert to multi-class problems using a one-versus-all strategy shown to be amongst the best performing strategies (Rennie and Rifkin, 2001) . We normalise documents to be vectors of unit length, and resolve decision ambiguities by sole means of distance to the hyperplane. We also note that experimentation with non-linear kernels showed no consistent trends, and made very little difference to performance. Table 1 displays results for the three models over the four datasets. We use the simplest measure of classifier performance, accuracy, which is simply the total number of correct decisions over the ten folds, divided by the size of the corpus. In response to a growing unease over the use of significance tests (because they have a tendency to overstate significance, as well as obscure effects of sample size) we provide 95% intervals for accuracy as well as the metric itself. To calculate these, we view accuracy as an (unknown) parameter to a binomial distribution such that the number of correctly classified documents is a binomially distributed random variable. We then calculate the Bayesian interval for the parameter, as described in (Brown et al., 2001) , which allows immediate quantification of uncertainty in the true accuracy after a limited sample.",
"cite_spans": [
{
"start": 75,
"end": 91,
"text": "(Joachims, 1999)",
"ref_id": "BIBREF9"
},
{
"start": 218,
"end": 243,
"text": "(Rennie and Rifkin, 2001)",
"ref_id": "BIBREF18"
},
{
"start": 1256,
"end": 1276,
"text": "(Brown et al., 2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 512,
"end": 519,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluating the Models",
"sec_num": "4"
},
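The Bayesian interval described above can be sketched without specialist libraries: following Brown et al. (2001), under the Jeffreys prior Beta(0.5, 0.5) the posterior for accuracy is Beta(correct + 0.5, incorrect + 0.5), and its quantiles can be approximated by Monte Carlo sampling. A minimal illustrative sketch (function name and sample count are ours, not from the paper):

```python
import random

def jeffreys_interval(correct, total, level=0.95, samples=50_000, seed=0):
    """Monte Carlo approximation of the Bayesian (Jeffreys) interval for a
    binomial proportion: the posterior is Beta(correct + 0.5, total - correct + 0.5)."""
    rng = random.Random(seed)
    draws = sorted(rng.betavariate(correct + 0.5, total - correct + 0.5)
                   for _ in range(samples))
    lo = draws[int((1 - level) / 2 * samples)]   # lower posterior quantile
    hi = draws[int((1 + level) / 2 * samples)]   # upper posterior quantile
    return lo, hi

# e.g. 17,132 of 20,000 documents correct, i.e. 85.66% observed accuracy
lo, hi = jeffreys_interval(17132, 20000)
```

With a large corpus the interval is narrow (a few tenths of a percent either side of the observed accuracy), which is why small performance differences in Table 1 can still exceed posterior uncertainty.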
{
"text": "As can be seen from the performance figures, no one classifier is totally dominant, although there are obvious and substantial gains in using the beta-binomial model on the Newsgroups and Enron tasks when compared to all other models. The SpamAssassin corpus shows the beta-binomial model and the SVM to be considerably better than the other two models, but there is little to choose between them. The WebKB task, however, shows extremely unusual results: the SVM substantially outperforms all other methods, and of the generative approaches the multinomial is clearly superior. In all cases, the Dirichlet model actually performs worse than the multinomial model, in contrast to the observations of (Madsen et al., 2005) .",
"cite_spans": [
{
"start": 699,
"end": 720,
"text": "(Madsen et al., 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In terms of comparison with other work, we note that the performance of our multinomial model agrees with that reported elsewhere, including for example (Rennie et al., 2003; Eyheramendy et al., 2003; Madsen et al., 2005; Jansche, 2003) . Our Dirichlet model performs worse than that in (Madsen et al., 2005) (85% here compared to 89% in that work), which we attribute to their experimentation with alternate smoothing as described in \u00a73.1. We note, however, that the beta-binomial model here still outperforms that work by a considerable margin. Finally, we note that our beta-binomial model outperforms that in (Jansche, 2003) , which we attribute mainly to the altered estimate, but also to the partial vocabulary used in that work. In fact, (Jansche, 2003) finds little to separate the beta-binomial and multinomial models for larger vocabularies, in stark contrast to the work here, and this is doubtless due to the parameter estimation.",
"cite_spans": [
{
"start": 148,
"end": 169,
"text": "(Rennie et al., 2003;",
"ref_id": "BIBREF19"
},
{
"start": 170,
"end": 195,
"text": "Eyheramendy et al., 2003;",
"ref_id": "BIBREF5"
},
{
"start": 196,
"end": 216,
"text": "Madsen et al., 2005;",
"ref_id": "BIBREF15"
},
{
"start": 217,
"end": 231,
"text": "Jansche, 2003)",
"ref_id": "BIBREF7"
},
{
"start": 282,
"end": 302,
"text": "(Madsen et al., 2005",
"ref_id": "BIBREF15"
},
{
"start": 610,
"end": 625,
"text": "(Jansche, 2003)",
"ref_id": "BIBREF7"
},
{
"start": 742,
"end": 757,
"text": "(Jansche, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "One might expect the performance of a hierarchical sampling model to eclipse that of the SVM because of the nature of the decision boundary, provided certain conditions are met: the SVM estimates a linear decision boundary, as does the multinomial classifier, whereas the decision boundaries of the hierarchical classifiers are non-linear and can represent more complex word behaviour, provided that sufficient data exist to estimate it. However, unlike generic non-linear SVMs (which made little difference compared to a linear SVM), the non-linear decision boundary here arises naturally from a model of word behaviour.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "For the hierarchical models, performance rests on the ability to estimate both the rate of word occurrence \u03b8_j and the way that this rate varies between documents. Reliably estimating the variance (and arguably the rate as well) requires words to occur a sufficient number of times. However, this section will demonstrate that two of the datasets contain many words which do not occur with sufficient frequency to estimate these parameters, and on those two the linear SVM's performance is correspondingly more competitive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "We present two quantifications of word reuse to support our conclusions. The first is the frequency spectrum for each of the four corpora, shown in Figure 1; the two more problematic datasets appear at the top of the figure. To generate the charts, we pool all documents from all classes in each problem, and count the number of words that appear once, twice, and so on. The x axis is the number of times a word occurs, and the y axis the total number of words which have that count.",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 150,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
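The frequency spectra described above (how many word types occur exactly once, twice, and so on in the pooled corpus) amount to two nested counts; a minimal sketch with illustrative names:

```python
from collections import Counter

def frequency_spectrum(tokens):
    """Map each occurrence count k to the number of word types
    occurring exactly k times in the pooled token stream."""
    word_counts = Counter(tokens)             # word type -> occurrence count
    spectrum = Counter(word_counts.values())  # count k   -> number of types
    return dict(sorted(spectrum.items()))

frequency_spectrum("a a b c c c".split())
```

Plotting the keys on the x axis against the values on a logarithmic y axis reproduces the style of chart in Figure 1.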
{
"text": "The WebKB corpus has the large majority of words occurring very few times (the mass of the distribution is concentrated towards the left of the chart), while the SpamAssassin corpus is more reasonable and the Newsgroups corpus has by far the most words which occur with substantial frequency (this correlates perfectly with the relative performance of the classifiers on these datasets). For the Enron corpus it is somewhat harder to tell, since its small size means that no words occur with substantial frequency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "We also consider the proportion of all word pairs in a corpus in which the first word is the same as the second. If a corpus has n_\u2022 words in total, with counts n_1, ..., n_v for the v vocabulary words, then the statistic is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r = \\frac{1}{n_\\bullet (n_\\bullet - 1)/2} \\sum_i \\frac{n_i (n_i - 1)}{2}.",
"eq_num": "(14)"
}
],
"section": "Analysis",
"sec_num": "6"
},
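Equation (14) is the probability that two token positions drawn without replacement from the pooled corpus hold the same word type; the paper computes it once per class and then averages (Table 2). A sketch of the statistic (function name is illustrative):

```python
from collections import Counter

def repeat_rate(tokens):
    """Equation (14): the fraction of all unordered token pairs
    in which both tokens are the same word type."""
    counts = Counter(tokens)
    n = sum(counts.values())          # n_bullet: total tokens
    if n < 2:
        return 0.0
    same = sum(c * (c - 1) for c in counts.values()) / 2   # same-type pairs
    total = n * (n - 1) / 2                                 # all pairs
    return same / total
```

For example, the stream "a a b" has three unordered pairs, one of which (the two a's) matches, giving r = 1/3; higher values indicate a stronger tendency to reuse words.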
{
"text": "To measure differing tendencies to reuse words, we calculate the r statistic once for each class, and then its mean across all classes in a problem (Table 2). We note that the two corpora on which the hierarchical model dominates show a much greater tendency towards word reuse, meaning the extra parameters can be estimated with greater accuracy. The SpamAssassin corpus is, by this measure, a harder task, but this is somewhat mitigated by the more even frequency distribution evidenced in Figure 1; the WebKB corpus, on the other hand, does not look promising for the hierarchical model by either measure. [Table 1: Performance of four classifiers on four tasks. Error is the 95% interval for accuracy; bold denotes best performance on a task; * denotes performance superior to the multinomial which exceeds posterior uncertainty (i.e. observed performance outside the 95% interval); + denotes the same for the SVM. Table 2: Mean r statistic for the four problems. Figure 1: Frequency spectra for the four datasets; the y axis is on a logarithmic scale.]",
"cite_spans": [],
"ref_spans": [
{
"start": 485,
"end": 494,
"text": "Figure 1;",
"ref_id": null
},
{
"start": 495,
"end": 502,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "In this paper, we have advocated the use of a joint beta-binomial distribution for word counts in documents for the purposes of classification. We have shown that this model outperforms classifiers based upon both multinomial and Dirichlet Compound Multinomial distributions for word counts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We have further made the case that, where corpora are sufficiently large as to warrant it, a generative classifier employing a hierarchical sampling model outperforms a discriminative linear SVM. We attribute this to the capacity of the proposed model to capture aspects of word behaviour beyond those of a simpler model. However, in cases where the data contain many infrequent words and the tendency to reuse words is relatively low, defaulting to a linear classifier (either the multinomial for a generative classifier, or preferably the linear SVM) increases performance relative to a more complex model, which cannot be fit with sufficient precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The corpus is available for download from www.dcs.shef.ac.uk/~ben.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The corpus is available online at http://spamassassin.apache.org/publiccorpus/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Authorship attribution of e-mail: Comparing classifiers over a new corpus for evaluation",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Allison",
"suffix": ""
},
{
"first": "Louise",
"middle": [],
"last": "Guthrie",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of LREC'08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allison, Ben and Louise Guthrie. 2008. Authorship at- tribution of e-mail: Comparing classifiers over a new corpus for evaluation. In Proceedings of LREC'08.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Interval estimation for a binomial proportion",
"authors": [
{
"first": "Lawrence",
"middle": [
"D"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Anirban",
"middle": [],
"last": "Das-Gupta",
"suffix": ""
}
],
"year": 2001,
"venue": "Statistical Science",
"volume": "16",
"issue": "2",
"pages": "101--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, Lawrence D., Tony Cai, and Anirban Das- Gupta. 2001. Interval estimation for a binomial pro- portion. Statistical Science, 16(2):101-117, may.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Poisson mixtures",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Gale",
"suffix": ""
}
],
"year": 1995,
"venue": "Natural Language Engineering",
"volume": "1",
"issue": "2",
"pages": "163--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, K. and W. Gale. 1995. Poisson mixtures. Natural Language Engineering, 1(2):163-190.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Inductive learning algorithms and representations for text categorization",
"authors": [
{
"first": "Susan",
"middle": [],
"last": "Dumais",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Platt",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Heckerman",
"suffix": ""
},
{
"first": "Mehran",
"middle": [],
"last": "Sahami",
"suffix": ""
}
],
"year": 1998,
"venue": "CIKM '98",
"volume": "",
"issue": "",
"pages": "148--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dumais, Susan, John Platt, David Heckerman, and Mehran Sahami. 1998. Inductive learning algo- rithms and representations for text categorization. In CIKM '98, pages 148-155.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Clustering documents with an exponential-family approximation of the dirichlet compound multinomial distribution",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Elkan",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Twenty-Third International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elkan, Charles. 2006. Clustering documents with an exponential-family approximation of the dirich- let compound multinomial distribution. In Proceed- ings of the Twenty-Third International Conference on Machine Learning.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The naive bayes model for text categorization. Artificial Intelligence and Statistics",
"authors": [
{
"first": "S",
"middle": [],
"last": "Eyheramendy",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Madigan",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eyheramendy, S., D. Lewis, and D. Madigan. 2003. The naive bayes model for text categorization. Arti- ficial Intelligence and Statistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Document classification by machine: theory and practice",
"authors": [
{
"first": "Louise",
"middle": [],
"last": "Guthrie",
"suffix": ""
},
{
"first": "Elbert",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Guthrie",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings COLING '94",
"volume": "",
"issue": "",
"pages": "1059--1063",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guthrie, Louise, Elbert Walker, and Joe Guthrie. 1994. Document classification by machine: theory and practice. In Proceedings COLING '94, pages 1059- 1063.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Parametric models of linguistic count data",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Jansche",
"suffix": ""
}
],
"year": 2003,
"venue": "ACL '03",
"volume": "",
"issue": "",
"pages": "288--295",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jansche, Martin. 2003. Parametric models of linguistic count data. In ACL '03, pages 288-295.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Text categorization with support vector machines: learning with many relevant features",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of ECML-98, 10th European Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "137--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachims, Thorsten. 1998. Text categorization with support vector machines: learning with many rele- vant features. In N\u00e9dellec, Claire and C\u00e9line Rou- veirol, editors, Proceedings of ECML-98, 10th Euro- pean Conference on Machine Learning, pages 137- 142.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Making large-scale svm learning practical",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Kernel Methods -Support Vector Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachims, Thorsten. 1999. Making large-scale svm learning practical. Advances in Kernel Methods - Support Vector Learning.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Distribution of content words and phrases in text and language modelling",
"authors": [
{
"first": "Slava",
"middle": [
"M"
],
"last": "Katz",
"suffix": ""
}
],
"year": 1996,
"venue": "Nat. Lang. Eng",
"volume": "2",
"issue": "1",
"pages": "15--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katz, Slava M. 1996. Distribution of content words and phrases in text and language modelling. Nat. Lang. Eng., 2(1):15-59.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The enron corpus: A new dataset for email classification research",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Klimt",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ECML 2004",
"volume": "",
"issue": "",
"pages": "217--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klimt, Bryan and Yiming Yang. 2004. The enron cor- pus: A new dataset for email classification research. In Proceedings of ECML 2004, pages 217-226.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "NewsWeeder: learning to filter netnews",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 12th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "331--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lang, Ken. 1995. NewsWeeder: learning to filter net- news. In Proceedings of the 12th International Con- ference on Machine Learning, pages 331-339.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Principled hybrids of generative and discriminative models",
"authors": [
{
"first": "Julia",
"middle": [
"A"
],
"last": "Lasserre",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"P"
],
"last": "Bishop",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Minka",
"suffix": ""
}
],
"year": 2006,
"venue": "CVPR '06: Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "87--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lasserre, Julia A., Christopher M. Bishop, and Thomas P. Minka. 2006. Principled hybrids of gen- erative and discriminative models. In CVPR '06: Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recog- nition, pages 87-94.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The beta-binomial mixture model and its application to tdt tracking and detection",
"authors": [
{
"first": "S",
"middle": [],
"last": "Lowe",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the DARPA Broadcast News Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lowe, S. 1999. The beta-binomial mixture model and its application to tdt tracking and detection. In Pro- ceedings of the DARPA Broadcast News Workshop.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Modeling word burstiness using the Dirichlet distribution",
"authors": [
{
"first": "Rasmus",
"middle": [
"E"
],
"last": "Madsen",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Kauchak",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Elkan",
"suffix": ""
}
],
"year": 2005,
"venue": "ICML '05",
"volume": "",
"issue": "",
"pages": "545--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Madsen, Rasmus E., David Kauchak, and Charles Elkan. 2005. Modeling word burstiness using the Dirichlet distribution. In ICML '05, pages 545-552.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A comparison of event models for na\u00efve bayes text classification",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Nigam",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings AAAI-98 Workshop on Learning for Text Categorization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCallum, A. and K. Nigam. 1998. A comparison of event models for na\u00efve bayes text classification. In Proceedings AAAI-98 Workshop on Learning for Text Categorization.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Estimating a dirichlet distribution",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Minka",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minka, Tom. 2000. Estimating a dirichlet distribution. Technical report, Microsoft Research.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Improving multiclass text classification with the Support Vector Machine",
"authors": [
{
"first": "Jason",
"middle": [
"D M"
],
"last": "Rennie",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Rifkin",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rennie, Jason D. M. and Ryan Rifkin. 2001. Improving multiclass text classification with the Support Vector Machine. Technical report, Massachusetts Institute of Technology, Artificial Intelligence Laboratory.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Tackling the poor assumptions of naive bayes text classifiers",
"authors": [
{
"first": "J",
"middle": [],
"last": "Rennie",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Shih",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Teevan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Karger",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rennie, J., L. Shih, J. Teevan, and D. Karger. 2003. Tackling the poor assumptions of naive bayes text classifiers.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A re-examination of text categorization methods",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 1999,
"venue": "22nd Annual International SIGIR",
"volume": "",
"issue": "",
"pages": "42--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang, Y. and X. Liu. 1999. A re-examination of text categorization methods. In 22nd Annual Interna- tional SIGIR, pages 42-49, Berkley, August.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "",
"num": null,
"uris": null
},
"TABREF0": {
"type_str": "table",
"html": null,
"num": null,
"text": "Table 1: Performance of four classifiers on four tasks. Error is the 95% interval for accuracy.",
"content": "<table><tr><td/><td>Newsgroups</td><td>Enron Authors</td><td>WebKB</td><td>SpamAssassin</td></tr><tr><td>Multinomial</td><td>85.66 \u00b1 0.5</td><td>74.55 \u00b1 1.34</td><td>85.69 \u00b1 1.06</td><td>95.96 \u00b1 0.5</td></tr><tr><td>DCM</td><td>85.03 \u00b1 0.51</td><td>74.43 \u00b1 1.34</td><td>82.69 \u00b1 1.15</td><td>91.47 \u00b1 0.7</td></tr><tr><td>Beta-Bin</td><td>91.65 \u00b1 0.4 *+</td><td>83.54 \u00b1 1.14 *+</td><td>84.81 \u00b1 1.08</td><td>97.35 \u00b1 0.4 *</td></tr><tr><td>SVM</td><td>88.8 \u00b1 0.45 *</td><td>80 \u00b1 1.23 *</td><td>92.68 \u00b1 0.79 *</td><td>97.65 \u00b1 0.38 *</td></tr></table>"
}
}
}
}