| { |
| "paper_id": "R11-1043", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:04:12.637058Z" |
| }, |
| "title": "Domain Independent Authorship Attribution without Domain Adaptation", |
| "authors": [ |
| { |
| "first": "Rohith", |
| "middle": [ |
| "K" |
| ], |
| "last": "Menon", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stony Brook University", |
| "location": {} |
| }, |
| "email": "rkmenon@cs.stonybrook.edu" |
| }, |
| { |
| "first": "Yejin", |
| "middle": [], |
| "last": "Choi", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "ychoi@cs.stonybrook.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Automatic authorship attribution, by its nature, is much more advantageous if it is domain (i.e., topic and/or genre) independent. That is, many real world problems that require authorship attribution may not have in-domain training data readily available. However, most previous work based on machine learning techniques has focused only on in-domain text for authorship attribution. In this paper, we present a comprehensive evaluation of various stylometric techniques for cross-domain authorship attribution. From experiments based on the Project Gutenberg book archive, we discover that extremely simple techniques based on stop-words are surprisingly robust against domain change, essentially removing the need for domain adaptation when supplied with a large amount of data.", |
| "pdf_parse": { |
| "paper_id": "R11-1043", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Automatic authorship attribution, by its nature, is much more advantageous if it is domain (i.e., topic and/or genre) independent. That is, many real world problems that require authorship attribution may not have in-domain training data readily available. However, most previous work based on machine learning techniques has focused only on in-domain text for authorship attribution. In this paper, we present a comprehensive evaluation of various stylometric techniques for cross-domain authorship attribution. From experiments based on the Project Gutenberg book archive, we discover that extremely simple techniques based on stop-words are surprisingly robust against domain change, essentially removing the need for domain adaptation when supplied with a large amount of data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Many real world problems that require authorship attribution, such as forensics (e.g., Luyckx and Daelemans (2008) ) or authorship disputes over old literature (e.g., Mosteller and Wallace (1984) ), may not have in-domain training data readily available. However, most previous work to date has focused on authorship attribution only for in-domain text (e.g., Stamatatos et al. (1999) , Luyckx and Daelemans (2008) , Raghavan et al. (2010) ). On limited occasions, researchers have included heterogeneous (cross-domain) datasets in their experiments, but they only report that the performance on heterogeneous datasets is much lower than that on homogeneous datasets, rather than directly tackling the problem of cross-domain or domain-independent authorship attribution (e.g., Peng et al. (2003) ).", |
| "cite_spans": [ |
| { |
| "start": 87, |
| "end": 114, |
| "text": "Luyckx and Daelemans (2008)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 165, |
| "end": 193, |
| "text": "Mosteller and Wallace (1984)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 357, |
| "end": 381, |
| "text": "Stamatatos et al. (1999)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 384, |
| "end": 411, |
| "text": "Luyckx and Daelemans (2008)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 414, |
| "end": 436, |
| "text": "Raghavan et al. (2010)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 759, |
| "end": 777, |
| "text": "Peng et al. (2003)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The lack of research on cross-domain scenarios is perhaps only reasonable, given that it is understood in the community that the predictive power of machine learning techniques does not transfer well across different domains (e.g., Blitzer et al. (2008) ). However, the seminal work of Blitzer et al. (2006) has shown that it is possible to mitigate the problem by examining the distributional differences of features across different domains, and to derive features that are robust against domain switch. Therefore, one could expect that applying domain adaptation techniques to authorship attribution can also help with cross-domain authorship attribution.", |
| "cite_spans": [ |
| { |
| "start": 231, |
| "end": 252, |
| "text": "Blitzer et al. (2008)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 285, |
| "end": 306, |
| "text": "Blitzer et al. (2006)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Before rushing into domain adaptation for authorship attribution, we take a slightly different approach to the problem: we first examine whether there exist domain-independent features that rarely change across different domains. If this is the case, and if such features are sufficiently informative, then domain adaptation might not be required at all to achieve high performance in domain-independent authorship attribution. Therefore, we conduct a comprehensive empirical evaluation using various stylistic features that are likely to be common across different topics and genres.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "From the experiments based on the Project Gutenberg book archive, we indeed discover stylistic features that are common across different domains. Against our expectations, some of these features, stop-words in particular, are extremely informative, essentially removing the need for domain adaptation when supplied with a large amount of data. Due to their simplicity, techniques based on stop-words scale particularly well to a large amount of data, in comparison to more computationally heavy techniques that require parsing (e.g., Raghavan et al. (2010) ).", |
| "cite_spans": [ |
| { |
| "start": 534, |
| "end": 556, |
| "text": "Raghavan et al. (2010)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The study of authorship attribution requires careful preparation of the dataset, in order not to draw overly optimistic conclusions. For instance, if the dataset consists of text where each author writes about a distinctive and exclusive topic, the task of authorship attribution reduces to topic categorization, a much easier task in general (e.g., (Mikros and Argiri, 2007) ). Statistical models that rely on topics will not generalize well to text in previously unseen topics or genres. A random collection of data is not the solution to this concern, as many authors are biased toward certain topics and genres. In order to avoid the pitfall of inadvertently benefiting from topic bias, we propose two different ways of data preparation: The first approach is to ensure that multiple authors are included per topic and genre, so that it is hard to predict the author purely based on topical words. The second approach is to ensure that multiple domains (i.e., topics and/or genres) are included per author, and that the test dataset includes domains that are previously unseen in the training data. Next we discuss stylistic features that are likely to be common across different domains. In this study, we compare the following set of features:", |
| "cite_spans": [ |
| { |
| "start": 342, |
| "end": 367, |
| "text": "(Mikros and Argiri, 2007)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Domain Independent Cues for Author Identification", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(1) n-gram sequences as a baseline, (2) part-of-speech sequences that capture shallow syntactic patterns, (3) modified tf-idf for n-grams that captures repeated phrases, (4) mood words that capture the author's unique emotional traits, and (5) stop word frequencies that capture the author's writing habits with common words. Each of these features is elaborated below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Domain Independent Cues for Author Identification", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We conjecture that n-gram sequences are not robust against domain changes, as n-grams are powerful features for topic categorization (e.g., (T\u00fcrko\u01e7lu et al., 2007) ). We therefore use n-gram-based features as a baseline to quantify how much domain change affects performance. The normalized frequencies of the 100 most frequent stemmed (Porter, 1997) 3-grams 1 are encoded as features.", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 163, |
| "text": "(T\u00fcrko\u01e7lu et al., 2007)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 332, |
| "end": 346, |
| "text": "(Porter, 1997)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "N-gram Sequences as a Topic Dependent Baseline", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "To capture syntactic patterns unique to authors, we use 3-gram sequences of part-of-speech (POS) tags. To be robust across domain changes, we use only the 100 most frequent 3-grams of part-of-speech tags as features. To encode a feature from each such 3-gram POS sequence, we use the frequency of each POS sequence normalized by the number of POS grams in the document. We expect these shallow syntactic patterns to help characterize the favorite sentence structures used by the authors. We make use of the Stanford parser (Klein and Manning, 2003) to assign part-of-speech tags to the given document.", |
| "cite_spans": [ |
| { |
| "start": 521, |
| "end": 546, |
| "text": "(Klein and Manning, 2003)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3-gram Part-of-Speech Sequences to Capture Favorite Sentence Structure", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Tf-idf assigns a score to a term indicating how informative the term is, by multiplying the frequency of the term within the document (term frequency) by the rarity of the term across the corpus (inverse document frequency). Tf-idf is known to be highly effective for text categorization. In this work, we experiment with modified tf-idf in order to accommodate the nature of authorship attribution more directly. We propose two such variants:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modified tf \u2212 idf for 3-gram Sequences", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In this variant, we take the inverse-author-frequency instead of the inverse-document-frequency, as terms that occur across many authors are not as informative as terms unique to a given author. For the training data, we compute tf-iAf based on the known authors of each document; in the test data, however, we do not have access to the authors of each document. Therefore, we set the tf-iAf of the test data as the tf of the test data weighted by the iAf of the training data. We generate these features for the top 500 3-gram sequences ordered by tf-iAf scores from each author. We compute different tf-iAf values for different authors. The exact formula we use for a given author i is given below:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "tf-iAf -Term-Frequency Inverse-Author-Frequency", |
| "sec_num": null |
| }, |
| { |
| "text": "Tfiaf_i = ( sum_{j=1}^{K_i} f_{ij} / N_{ij} ) * iaf_i^2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "tf-iAf -Term-Frequency Inverse-Author-Frequency", |
| "sec_num": null |
| }, |
| { |
| "text": "In this variant, we augment the previous approach with topic-frequency, which is the number of different topics in which a given term appears for a given author. We generate these features for the top 500 3-gram sequences ordered by tf-iAf-tpf scores from each author. Again, we compute different tf-iAf-tpf values for different authors. The exact formula for a given author i is given below:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "tf-iAf-tpf -Term-Frequency Inverse-Author-Frequency Topic-Frequency", |
| "sec_num": null |
| }, |
| { |
| "text": "TfiafTpf_i = Tfiaf_i * tpf_i^2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "tf-iAf-tpf -Term-Frequency Inverse-Author-Frequency Topic-Frequency", |
| "sec_num": null |
| }, |
| { |
| "text": "where we take the second power of the topic frequency, as the number of distinctive topics is small in general.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "tf-iAf-tpf -Term-Frequency Inverse-Author-Frequency Topic-Frequency", |
| "sec_num": null |
| }, |
| { |
| "text": "We conjecture that mood words 2 will reveal unique emotional traits of each author. In particular, either the use of certain types of mood words, or the lack thereof, will reveal a common mood or tone in documents that is orthogonal to topics or genres. To encode features based on mood words, we include the normalized frequency of each mood word in a given document in the feature vector. Normalization is done by dividing the frequency by the total number of words in the document. In total, we consider a list of 859 mood words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mood Words to Capture Emotional Traits", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Many researchers have reported that the usage patterns of stop-words are a very strong indication of writing style (Arun et al. (2009) , Garca and Martn (2007) ). Based on a list of 659 stop words, we encode features as the frequency of each stop-word normalized by the total number of words in the document 3 . These normalized frequencies indicate two important characteristics of stop-word usage by authors:", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 129, |
| "text": "(Arun et al. (2009)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 132, |
| "end": 154, |
| "text": "Garca and Martn (2007)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stop-words to Capture Writing Habit", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "(1) Relative usage of function words by authors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stop-words to Capture Writing Habit", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "(2) Fraction of function words in document.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stop-words to Capture Writing Habit", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "In order to investigate the topic influence on authorship attribution, we need a dataset that consists of articles written by prolific authors who wrote on a variety of topics. Furthermore, it would be ideal if the dataset already includes topic categorization, so that we do not need to manually categorize each article into different topics and genre.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset with Varying Degree of Domain Change", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Fortunately, such a dataset is available online: we use the Project Gutenberg book archive (http://www.gutenberg.org), which contains an extensive collection of books. In order to remove topic bias in authors, we rely on the catalog of Project Gutenberg. The categories of Project Gutenberg correspond to a mixture of topics and genres.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset with Varying Degree of Domain Change", |
| "sec_num": "3" |
| }, |
| { |
| "text": "There are two types of categories defined in Project Gutenberg: the first is LCSH (Library of Congress Subject Headings) 4 and the second is LCC (Library of Congress Classification). 5 Examples of LCSH and LCC categories are shown in Table 1 and Table 2 respectively. As can be seen in Table 1 , the categories of LCSH are more fine-grained, and some of the categories overlap, e.g., \"history\" and \"history and criticism\". In contrast, the categories of LCC are more coarse-grained, so that they are more distinct from each other.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 234, |
| "end": 241, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 246, |
| "end": 253, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 286, |
| "end": 293, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset with Varying Degree of Domain Change", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In the next section, we present the following four experiments in order of increasing difficulty. We use the terms topic, genre, and domain interchangeably in what follows, as the LCC & LCSH categories are themselves mixtures of topics and genres.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset with Varying Degree of Domain Change", |
| "sec_num": "3" |
| }, |
| { |
| "text": "(1) Balanced topic: Topics in the test data are guaranteed to appear in the training data. (2) Semi-disjoint topic using LCSH: Topics in the test data differ from topics in the training data according to LCSH. (3) Semi-disjoint topic using LCC: Topics in the test data differ from topics in the training data according to LCC. (4) Perfectly-disjoint topic using LCC: Topics in the test data differ from topics in the training data according to LCC, and documents with unknown categories are discarded to create perfectly disjoint training and test data, while in (2) and (3) documents with unknown categories are added to maintain a large dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset with Varying Degree of Domain Change", |
| "sec_num": "3" |
| }, |
| { |
| "text": "American drama, Eugenics, American poetry, Fairy tales, Architecture, Family, Art, Farm life, Authors, Fiction, Ballads, Fishing, Balloons, France, Children, Harbors, Civil War, History, Conduct of life, History and criticism, Correspondence, History -Revolution, Country life, Cycling, Description and travel, ... ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset with Varying Degree of Domain Change", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We present four experiments in order of increasing difficulty. In all experiments, we use the SVM classifier with the sequential minimal optimization (SMO) implementation available in the Weka package (Hall et al., 2009) . We use a polynomial kernel with regularization parameter C = 1.", |
| "cite_spans": [ |
| { |
| "start": 201, |
| "end": 220, |
| "text": "(Hall et al., 2009)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We identify a set of 14 authors who have written at least 25 books and have also written books in at least 6 categories. This amounts to 844 books in total for all authors. Table 3 tabulates the author statistics. In our first experiment, we randomly split the 844 books into 744 training books and 100 test books across the 14 authors. This setting is simpler than the true topic-disjoint scenario, where there is no intersection between topics in the training and test sets. Nevertheless, this setting is not an easy one, as we only consider authors who have written in at least 6 topics, which makes it harder to benefit from topic bias in authors. Note that a random guess will give an accuracy of only 1/14. Result Table 4 tabulates the accuracy, precision, recall, and f-score obtained for the various features described in Section 2. Note that f-scores (including precision and recall) are first computed for each author, then we take the macro average over different authors. We perform 8-way cross validation for this setup. The first row, N-GRAM, is the baseline. It is interesting that n-gram-based features suffer already in this experimental setting, even though we do not deliberately change the topics across training and test data. All other features demonstrate strong performance, mostly achieving f-score and accuracy well above 90%, with the exception of TfIafTpf. Stop-word-based features achieve the highest performance, with 98.45% in f-score and 97.96% in accuracy. This echoes previously reported studies (e.g., Arun et al. (2009) ) indicating that stop words can reveal an author's unique writing styles and habits. We are nonetheless surprised to see that the performance of stop-word-based features is higher than that of more sophisticated approaches such as TfIaf or TfIafTpf.", |
| "cite_spans": [ |
| { |
| "start": 1518, |
| "end": 1536, |
| "text": "Arun et al. (2009)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 709, |
| "end": 716, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Balanced Topic Configuration", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "It is unexpected that tfiaf-tpf performs worse than tfiaf or POS-grams. We conjecture that this can be attributed to the fact that we calculate tfiaf-tpf only from the set of books that are categorized by LCC. We calculate tfiaf-tpf only from LCC-categorized books because only these categories are truly disjoint at the root level. Because we select tfiaf-tpf n-grams only from a subset of the books in training, it is possible that we missed some n-grams which would otherwise have high tfiaf-tpf scores.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Balanced Topic Configuration", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The high performance of mood words, reaching 95.22% in f-score and 95.92% in accuracy, confirms our hypothesis that they can reveal an author's unique emotional traits that are orthogonal to particular topics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Balanced Topic Configuration", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Note on the Baseline Because the baseline scores are very low, we also experimented with other baseline variants, not included in the table for brevity. First, we tested an increased number of n-grams: instead of using the top 100 3-grams per document, we experimented with the top 500 3-grams per document. However, this did not change the performance much. We also tried to incorporate all 3-grams, but we could not fit features based on all 3-grams into memory, as our dataset consists of many books in their entirety. We conclude the discussion of the first experiment by highlighting two important observations:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Balanced Topic Configuration", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 First, POS 3-gram features are also based on the top 100 POS 3-grams per document, and these unlexicalized features perform extremely well, with 91.51% f-score and 91.84% accuracy, using the same number of features as the baseline. \u2022 Second, all features presented here are highly efficient and scalable.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Balanced Topic Configuration", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In the second experiment, we use categories from LCSH. As shown in Table 1 , these categories are not completely disjoint. As a result, we split the training and test data with manual inspection of the LCSH categories to ensure that the two are as disjoint as possible. In this experiment, we focus on 6 of the 14 authors considered in the previous dataset in order to make it easier to split the training and test data based on disjoint topics. In particular, we place books in the fiction, essays, and history categories in the training set, and the rest in the test set. This results in 202 books for training and 72 books for testing. Despite our efforts, this split is not perfect: first, it might still allow topics with very subtle differences to show up in both training and test data. Second, the training set includes books that are not categorized by LCSH categories. As a result, these books with unknown categories might accidentally contain books whose topics overlap with the topics included in the test data. Nevertheless, authorship attribution becomes a much harder task than before, because a significant portion of the training and test data consists of disjoint topics. Table 5 : Semi-Disjoint Topics using LCSH (Experiment-2).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 67, |
| "end": 74, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 893, |
| "end": 900, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semi-Disjoint Topic using LCSH Configuration", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Result Table 5 tabulates the results. As expected, the overall performance drops for almost all approaches. The only exception is stop-word-based features, the top performer in the previous experiment. It is astonishing that the performance of stop-word-based features in fact does not drop at all, achieving 98.72% in f-score and 98.61% in accuracy. As before, the mixture of all features actually decreases the performance. Overall, however, the performance of most approaches remains strong, as most achieve scores well above 80% in f-score and accuracy. The baseline again performs very poorly, as n-grams are more sensitive to topic changes than the other features.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 7, |
| "end": 14, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semi-Disjoint Topic using LCSH Configuration", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Configuration For the third experiment, we use categories from LCC instead of LCSH. As described earlier, top categories of LCC are more disjoint than those of LCSH. We choose 5 authors who have written in \"Language and literature\" in addition to other categories. We then create a training set with books in categories that are not \"Language and Literature\". We also include books with unknown categories into the training dataset to maintain a reasonably large dataset. The test set consists of books in a single topic \"Language and Literature\". This split results in 146 books for training, and 112 books for testing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semi-Disjoint Topic using LCC", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Result Table 6 tabulates the result. Again, the f-scores (including precision and recall) are first computed per author, then we take the macro average over all authors. Surprisingly, the performance of all approaches increased. We conjecture the reason to be the overlap of unknown categories with categories in the test dataset. Stop-word and mood-based features achieve 100% prediction accuracy in this setting. However, we would like to point out that this extremely high performance of simple features is attainable only when supplied with a sufficiently large amount of data. See Section 4.5 for a discussion of the performance change with reduced data size.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 7, |
| "end": 14, |
| "text": "Table 6", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semi-Disjoint Topic using LCC", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Configuration Finally, we experiment on a set of data that is truly topic-independent: we try to learn author cues from one topic and use them to predict the authors of books written in different topics. In this experiment, the training set consists of books in the single topic \"Language and Literature\", which was the test dataset in the previous experiment. For testing, we take the training dataset from the previous experiment and remove the books with unknown categories to enforce fully disjoint topics between training and testing. This split results in 112 documents in the training data and 37 documents in the test data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Perfectly-Disjoint Topic using LCC", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Result Table 7 tabulates the result. Note that this experiment is indeed harder than the previous experiment, as the performance of most approaches dropped significantly. Here we find that the performance of tfiaf-tpf is very strong, achieving 95.33% in f-score and 94.59% in accuracy. Note that in all of the previous experiments, tfiaf-tpf performed considerably worse than tfiaf. This is because this is the only experiment that discards all books with unknown categories, which makes it possible for tfiaf-tpf to exploit the topic information more accurately. In fact, the performance of tfiaf-tpf is now almost as good as that of stop-word-based features, our all-time top performer, which achieves 97.13% in f-score and 97.30% in accuracy in this experiment. Mood words and POS-grams, previously high-performing approaches, do not appear to be very robust under drastic domain changes. Table 7 : Perfectly-Disjoint Topics using LCC.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 7, |
| "end": 14, |
| "text": "Table 7", |
| "ref_id": null |
| }, |
| { |
| "start": 581, |
| "end": 588, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Perfectly-Disjoint Topic using LCC", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "In this section, we briefly report how the performance of all approaches changes when we reduce the size of the data. For brevity, we report this only with respect to the last experiment. Table 8 shows the results when we reduce the size of the data to 10% and 50%, by taking the first x% of each book in the training and test data. In comparison to Table 7 , overall performance drops with reduced data. From these results, we conclude that (1) when faced with data reduction, the relative performance of stop-word-based features stands out even more, and that (2) the high performance of simple features is attainable when supplied with a sufficiently large amount of data. Stamatatos (2009) provides an excellent survey of the field. One of the prominent approaches in authorship attribution is the use of style markers (Stamatatos et al., 1999) . Our approaches make use of such style markers implicitly and more systematically. The work of Peng et al. (2003) , using character-level n-grams, achieves state-of-the-art accuracy (90%) on homogeneous (in-domain) data but drops significantly (74%) on heterogeneous (cross-domain) data. In contrast, we present approaches that perform extremely well even on heterogeneous data.", |
| "cite_spans": [ |
| { |
| "start": 685, |
| "end": 702, |
| "text": "Stamatatos (2009)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 832, |
| "end": 857, |
| "text": "(Stamatatos et al., 1999)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 954, |
| "end": 972, |
| "text": "Peng et al. (2003)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 188, |
| "end": 195, |
| "text": "Table 8", |
| "ref_id": null |
| }, |
| { |
| "start": 364, |
| "end": 371, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Perfectly-Disjoint Topic using LCC with Reduced Data", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "For all n-gram based features, 3-grams (N=3) were chosen because increasing N increased sparseness, while decreasing N failed to capture common phrases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "where f_ij is the frequency of a 3-gram for author i in document D_ij, D_ij is the j-th document by author i, N_ij is the total number of 3-grams in document D_ij, and K_i is the number of documents written by author i. We take the second power of the inverse author frequency, as the number of authors is much smaller than the number of documents in a corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The list of mood words is obtained from http://moods85.wordpress.com/mood-list/. The list of stopwords is obtained from http://www.ranks.nl/resources/stopwords.html.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.loc.gov/aba/cataloging/subject/weeklylists/ http://www.loc.gov/catdir/cpso/lcco/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Another interesting technique explored for authorship attribution is the use of PCFGs, as in the work of Raghavan et al. (2010) . They show that PCFG models are effective for authorship attribution, although their experiments were conducted only on homogeneous datasets. The approaches studied in this paper are much simpler and highly scalable, while remaining extremely effective.", |
| "cite_spans": [ |
| { |
| "start": 108, |
| "end": 130, |
| "text": "Raghavan et al. (2010)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": null |
| }, |
| { |
| "text": "We have presented a set of features for authorship attribution in a domain-independent setting. We have demonstrated that the features we compute are effective in predicting authorship while being robust against topic changes. We show this robustness by evaluating the features as the topics of training and test documents become increasingly disjoint. These experiments substantiate our claim that the proposed features capture stylistic traits of authors that persist across multiple domains. The simplicity of our features also makes them scalable, and hence applicable to large-scale data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Stopword graphs and authorship attribution in text corpora", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Arun", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Suresh", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "E V" |
| ], |
| "last": "Madhavan", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Semantic Computing, 2009. ICSC '09. IEEE International Conference on", |
| "volume": "", |
| "issue": "", |
| "pages": "192--196", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Arun, V. Suresh, and C.E.V. Madhavan. 2009. Stopword graphs and authorship attribution in text corpora. In Semantic Computing, 2009. ICSC '09. IEEE International Conference on, pages 192 -196.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Domain adaptation with structural correspondence learning", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Blitzer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspon- dence learning. In Conference on Empirical Meth- ods in Natural Language Processing, Sydney, Aus- tralia.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Learning bounds for domain adaptation", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Blitzer", |
| "suffix": "" |
| }, |
| { |
| "first": "Koby", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Kulesza", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "Jenn", |
| "middle": [], |
| "last": "Wortman", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "21", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jenn Wortman. 2008. Learning bounds for domain adaptation. In Advances in Neural In- formation Processing Systems 21, Cambridge, MA. MIT Press.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Function words in authorship attribution studies", |
| "authors": [ |
| { |
| "first": "Antonio", |
| "middle": [], |
| "last": "Miranda García", |
| "suffix": "" |
| }, |
| { |
| "first": "Javier", |
| "middle": [], |
| "last": "Calle Martín", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Literary and Linguistic Computing", |
| "volume": "22", |
| "issue": "", |
| "pages": "49--66", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Antonio Miranda García and Javier Calle Martín. 2007. Function words in authorship attribution studies. Literary and Linguistic Computing, 22(1):49-66.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The weka data mining software: an update", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Eibe", |
| "middle": [], |
| "last": "Frank", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Holmes", |
| "suffix": "" |
| }, |
| { |
| "first": "Bernhard", |
| "middle": [], |
| "last": "Pfahringer", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Reutemann", |
| "suffix": "" |
| }, |
| { |
| "first": "Ian", |
| "middle": [ |
| "H" |
| ], |
| "last": "Witten", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "SIGKDD Explor. Newsl", |
| "volume": "11", |
| "issue": "", |
| "pages": "10--18", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The weka data mining software: an update. SIGKDD Explor. Newsl., 11:10-18, November.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A* parsing: fast exact Viterbi parse selection", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology -Volume 1, NAACL '03", |
| "volume": "", |
| "issue": "", |
| "pages": "40--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Klein and Christopher D. Manning. 2003. A* parsing: fast exact Viterbi parse selection. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL '03, pages 40-47, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Authorship attribution and verification with many authors and limited data", |
| "authors": [ |
| { |
| "first": "Kim", |
| "middle": [], |
| "last": "Luyckx", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "513--520", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kim Luyckx and Walter Daelemans. 2008. Author- ship attribution and verification with many authors and limited data. In Proceedings of the 22nd In- ternational Conference on Computational Linguis- tics (Coling 2008), pages 513-520, Manchester, UK, August. Coling 2008 Organizing Committee.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Investigating topic influence in authorship attribution", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Mikros", |
| "suffix": "" |
| }, |
| { |
| "first": "Eleni", |
| "middle": [ |
| "K" |
| ], |
| "last": "Argiri", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "PAN", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George Mikros and Eleni K. Argiri. 2007. Investigat- ing topic influence in authorship attribution. In PAN.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Applied Bayesian and Classical Inference: The Case of the Federalist Papers", |
| "authors": [ |
| { |
| "first": "Frederick", |
| "middle": [], |
| "last": "Mosteller", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "L" |
| ], |
| "last": "Wallace", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frederick Mosteller and David L. Wallace. 1984. Ap- plied Bayesian and Classical Inference: The Case of the Federalist Papers. Springer-Verlag.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Language independent authorship attribution using character level language models", |
| "authors": [ |
| { |
| "first": "Fuchun", |
| "middle": [], |
| "last": "Peng", |
| "suffix": "" |
| }, |
| { |
| "first": "Dale", |
| "middle": [], |
| "last": "Schuurmans", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaojun", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Vlado", |
| "middle": [], |
| "last": "Keselj", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the tenth conference on European chapter of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "267--274", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fuchun Peng, Dale Schuurmans, Shaojun Wang, and Vlado Keselj. 2003. Language independent author- ship attribution using character level language mod- els. In Proceedings of the tenth conference on Euro- pean chapter of the Association for Computational Linguistics -Volume 1, EACL '03, pages 267-274, Stroudsburg, PA, USA.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "An algorithm for suffix stripping", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "F" |
| ], |
| "last": "Porter", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "313--316", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. F. Porter, 1997. An algorithm for suffix stripping, pages 313-316. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Authorship attribution using probabilistic context-free grammars", |
| "authors": [ |
| { |
| "first": "Sindhu", |
| "middle": [], |
| "last": "Raghavan", |
| "suffix": "" |
| }, |
| { |
| "first": "Adriana", |
| "middle": [], |
| "last": "Kovashka", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the ACL 2010 Conference Short Papers, ACLShort '10", |
| "volume": "", |
| "issue": "", |
| "pages": "38--42", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sindhu Raghavan, Adriana Kovashka, and Raymond Mooney. 2010. Authorship attribution using prob- abilistic context-free grammars. In Proceedings of the ACL 2010 Conference Short Papers, ACLShort '10, pages 38-42, Stroudsburg, PA, USA.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Automatic authorship attribution", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Stamatatos", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Fakotakis", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Kokkinakis", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the ninth conference on European chapter of the Association for Computational Linguistics, EACL '99", |
| "volume": "", |
| "issue": "", |
| "pages": "158--164", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Stamatatos, N. Fakotakis, and G. Kokkinakis. 1999. Automatic authorship attribution. In Proceedings of the ninth conference on European chapter of the As- sociation for Computational Linguistics, EACL '99, pages 158-164, Stroudsburg, PA, USA.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A survey of modern authorship attribution methods", |
| "authors": [ |
| { |
| "first": "Efstathios", |
| "middle": [], |
| "last": "Stamatatos", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "J. Am. Soc. Inf. Sci. Technol", |
| "volume": "60", |
| "issue": "", |
| "pages": "538--556", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Efstathios Stamatatos. 2009. A survey of modern au- thorship attribution methods. J. Am. Soc. Inf. Sci. Technol., 60:538-556, March.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Author attribution of Turkish texts by feature mining", |
| "authors": [ |
| { |
| "first": "Filiz", |
| "middle": [], |
| "last": "T\u00fcrko\u011flu", |
| "suffix": "" |
| }, |
| { |
| "first": "Banu", |
| "middle": [], |
| "last": "Diri", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Fatih Amasyali", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the intelligent computing 3rd international conference on Advanced intelligent computing theories and applications, ICIC'07", |
| "volume": "", |
| "issue": "", |
| "pages": "1086--1093", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Filiz T\u00fcrko\u011flu, Banu Diri, and M. Fatih Amasyali. 2007. Author attribution of Turkish texts by feature mining. In Proceedings of the intelligent computing 3rd international conference on Advanced intelligent computing theories and applications, ICIC'07, pages 1086-1093, Berlin, Heidelberg. Springer-Verlag.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "num": null, |
| "text": "Examples of LCSH Categories.", |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Music And Books On Music, Philosophy, Psychology, Fine</td></tr><tr><td>Arts, Religion, Auxiliary Sciences Of History, Language</td></tr><tr><td>And Literature, World History (Non Americas), Science,</td></tr><tr><td>History Of The Americas, Medicine, Geography, Anthropol-</td></tr><tr><td>ogy, Agriculture, Recreation, Social Sciences, Technology,</td></tr><tr><td>Political Science, ...</td></tr></table>" |
| }, |
| "TABREF1": { |
| "num": null, |
| "text": "", |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF3": { |
| "num": null, |
| "text": "Author statistics. Numbers in parentheses (x, y) under LCC and LCSH columns indicate the number of books categorized (x) and the number of unique categories the author has written in (y).", |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Features</td><td>Acc</td><td>Prec</td><td>Rec</td><td>F1</td></tr><tr><td>NGram</td><td>61.22</td><td>64.75</td><td>59.51</td><td>58.02</td></tr><tr><td>TfIaf</td><td>90.82</td><td>94.69</td><td>91.54</td><td>92.10</td></tr><tr><td>TfIafTpf</td><td>84.69</td><td>86.02</td><td>85.61</td><td>84.96</td></tr><tr><td>POSGram</td><td>91.84</td><td>93.19</td><td>91.22</td><td>91.51</td></tr><tr><td>MoodWord</td><td>95.92</td><td>94.99</td><td>96.28</td><td>95.22</td></tr><tr><td>StopWord</td><td>97.96</td><td>99.21</td><td>97.92</td><td>98.45</td></tr><tr><td>All</td><td>93.88</td><td>95.30</td><td>94.68</td><td>94.41</td></tr></table>" |
| }, |
| "TABREF4": { |
| "num": null, |
| "text": "", |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF7": { |
| "num": null, |
| "text": "Semi-Disjoint Topics using LCC (Experiment-3)", |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF8": { |
| "num": null, |
| "text": "", |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Features</td><td>Acc</td><td>Prec</td><td>Rec</td><td>F1</td></tr><tr><td>NGram</td><td>56.76</td><td>55.33</td><td>55.50</td><td>53.07</td></tr><tr><td>TfIaf</td><td>86.49</td><td>89.00</td><td>89.39</td><td>87.35</td></tr><tr><td>TfIafTpf</td><td>94.59</td><td>95.00</td><td>96.36</td><td>95.33</td></tr><tr><td>POSGram</td><td>64.86</td><td>69.57</td><td>71.17</td><td>69.33</td></tr><tr><td>MoodWord</td><td>81.08</td><td>83.83</td><td>83.12</td><td>81.84</td></tr><tr><td>StopWord</td><td>97.30</td><td>97.50</td><td>97.14</td><td>97.13</td></tr><tr><td>All</td><td>97.30</td><td>97.50</td><td>98.18</td><td>97.71</td></tr></table>" |
| } |
| } |
| } |
| } |