{ "paper_id": "C02-1045", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:19:10.184334Z" }, "title": "A Method of Cluster-Based Indexing of Textual Data", "authors": [ { "first": "Akiko", "middle": [], "last": "Aizawa", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institute of Informatics", "location": {} }, "email": "akiko@nii.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a framework for clustering in text-based information retrieval systems. The prominent feature of the proposed method is that documents, terms, and other related elements of textual information are clustered simultaneously into small overlapping clusters. In the paper, the mathematical formulation and implementation of the clustering method are briefly introduced, together with some experimental results.", "pdf_parse": { "paper_id": "C02-1045", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a framework for clustering in text-based information retrieval systems. The prominent feature of the proposed method is that documents, terms, and other related elements of textual information are clustered simultaneously into small overlapping clusters. In the paper, the mathematical formulation and implementation of the clustering method are briefly introduced, together with some experimental results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This paper is an attempt to provide a view of indexing as a process of generating many small clusters overlapping with each other. Individual clusters, referred to as micro-clusters in this paper, contain multiple subsets of associated elements, such as documents, terms, authors, keywords, and other related attribute sets. 
For example, a cluster in Figure 1 represents 'a set of documents written by a specific community of authors related to a subject represented by a set of terms'.", "cite_spans": [], "ref_spans": [ { "start": 351, "end": 359, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our motivations for considering such clusters are that (i) the universal properties of textbased information spaces, namely large scale, sparseness, and local redundancy (Joachims, 2001) , may be better manipulated by focusing on only limited sub-regions of the space; and also that (ii) the multiple viewpoints of information contents, which a conventional retrieval system provides, can be better utilized by considering not only the relations between 'documents' and 'terms' but also associations between other attributes such as 'authors' within the same unified framework.", "cite_spans": [ { "start": 170, "end": 186, "text": "(Joachims, 2001)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Based on the background, this paper presents a framework of micro-clustering, within which we adopt a probabilistic formulation of co- occurrences of textual elements. For simplicity, we focus primarily on the co-occurrences between 'documents' and 'terms' in our explanation, but the presented framework is directly applicable to more general cases with more than two attributes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In information retrieval research, matrix transformation-based indexing methods such as Latent Semantic Indexing (LSI) (Deerwester et al., 1990) have recently become quite common. These methods can be viewed as an established basis for exposing hidden associations between documents and terms. 
However, their objective is to generate a compact representation of the original information space, and as a consequence the resulting orthogonal vectors tend to be dense, with many non-zero elements (Dhillon and Modha, 1999) . In addition, because the reduction process is globally optimized, matrix transformation-based methods become computationally infeasible when dealing with high-dimensional data.", "cite_spans": [ { "start": 119, "end": 144, "text": "(Deerwester et al., 1990)", "ref_id": "BIBREF3" }, { "start": 502, "end": 527, "text": "(Dhillon and Modha, 1999)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Background Issues A view from indexing", "sec_num": "2" }, { "text": "The document-clustering problem has also been extensively studied in the past (Iwayama and Tokunaga, 1995; Steinbach et al., 2000) . The majority of previous approaches to clustering construct either a partition or a hierarchy of the target documents, where the generated clusters are either exclusive or nested. However, generating mutually exclusive or tree-structured clusters is in general a hard-constrained problem and is thus likely to incur high computational costs when dealing with large-scale data. 
Also, such a constraint is not necessarily required in actual applications, because 'topics' of documents, or rather 'indices' in our context, overlap arbitrarily by nature (Zamir and Etzioni, 1998) .", "cite_spans": [ { "start": 78, "end": 106, "text": "(Iwayama and Tokunaga, 1995;", "ref_id": "BIBREF6" }, { "start": 107, "end": 130, "text": "Steinbach et al., 2000)", "ref_id": "BIBREF11" }, { "start": 691, "end": 716, "text": "(Zamir and Etzioni, 1998)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "A view from clustering", "sec_num": null }, { "text": "Based on the above observations, our basic strategy is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Strategy:", "sec_num": null }, { "text": "\u2022 Instead of generating component vectors with many non-zero elements, produce only limited subsets of elements, i.e., micro-clusters, with significance weights. \u2022 Instead of transforming the entire co-occurrence matrix into a different feature space, extract tightly associated sub-structures of the elements on the graphical representation of the matrix. \u2022 Use entropy-based criteria for cluster evaluation so that the sizes of the generated clusters can be determined independently of other existing clusters. \u2022 Allow the generated clusters to overlap with each other. By assuming that each element can be categorized into multiple clusters, we can reduce the problem to a feasible level where the clusters are processed individually.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Strategy:", "sec_num": null }, { "text": "Another important aspect of the proposed micro-clustering scheme is that the method employs simultaneous clustering of its component elements. 
This not only enables us to combine issues in term indexing and document clustering, as mentioned above, but is also useful for connecting matrix-based and graph-based notions of clustering; the latter is based on the association networks of the elements extracted from the original co-occurrence matrices. Some recent topics dealing with this sort of duality and/or graphical view include: the Information Bottleneck Method (Slonim and Tishby, 2000) , Conceptual Indexing (Dhillon and Modha, 1999; Karypis and Han, 2000) , and Bipartite Spectral Graph Partitioning (Dhillon, 2001) , although each of these follows its own mathematical formulation.", "cite_spans": [ { "start": 569, "end": 594, "text": "(Slonim and Tishby, 2000)", "ref_id": "BIBREF10" }, { "start": 617, "end": 642, "text": "(Dhillon and Modha, 1999;", "ref_id": "BIBREF4" }, { "start": 643, "end": 665, "text": "Karypis and Han, 2000)", "ref_id": "BIBREF8" }, { "start": 710, "end": 725, "text": "(Dhillon, 2001)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related studies:", "sec_num": null }, { "text": "Let D = {d 1 , \u2022 \u2022 \u2022 , d N } be a collection of N target documents, and let S D be a subset of documents such that S D \u2286 D. Likewise, let T = {t 1 , \u2022 \u2022 \u2022 , t M } be a set of M distinct terms that appear in the target document collection, and let S T be a subset of terms such that S T \u2286 T . 
A cluster, denoted as c, is defined as a combination of S T and S D :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition of Micro-Clusters", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c = (S T , S D ).", "eq_num": "(1)" } ], "section": "Definition of Micro-Clusters", "sec_num": "3.1" }, { "text": "The co-occurrences of terms and documents can be expressed as a matrix of size M \u00d7 N in which the (i, j)-th cell indicates that t i (\u2208 T ) appears in d j (\u2208 D).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition of Micro-Clusters", "sec_num": "3.1" }, { "text": "We make the value of the (i, j)-th cell equal to freq(t i , d j ). Although we primarily assume the value is either '1' (exist) or '0' (not exist) in this paper, our formulation could easily be extended to the cases where freq(t i , d j ) represents the actual number of times that t i appears in d j . The observed total frequency of t i over all the documents in D is denoted as freq(t i , D). Similarly, the observed total frequency of d j , i.e. the total number of terms contained in d j , is denoted as freq(T, d j ). These values correspond to the column and row sums of the co-occurrence matrix. The total frequency over all the documents is denoted as freq(T, D). 
Thus,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition of Micro-Clusters", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "freq(T, D) = \u03a3 t i \u2208T freq(t i , D) = \u03a3 d j \u2208D freq(T, d j ) = \u03a3 t i \u2208T \u03a3 d j \u2208D freq(t i , d j ).", "eq_num": "(2)" } ], "section": "Definition of Micro-Clusters", "sec_num": "3.1" }, { "text": "We sometimes use freq(t i ) for freq(t i , D), freq(d j ) for freq(T, d j ), and F for freq(T, D).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition of Micro-Clusters", "sec_num": "3.1" }, { "text": "The view of the co-occurrence matrix can be further extended by assigning probabilities to each cell. With the probabilistic formulation, t i and d j are considered as independently observed events, and their combination as a single co-occurrence event (t i , d j ). Then, a cluster c = (S T , S D ) is also considered as a single co-occurrence event of observing one of t i \u2208 S T within one of d j \u2208 S D .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Formulation", "sec_num": "3.2" }, { "text": "In estimating the probability of each event, we use a simple discounting method similar to the absolute discounting in probabilistic language modeling studies (Baayen, 2001) . 
The method subtracts a constant value \u03b4, called a discounting coefficient, from all the observed term frequencies and estimates the probability of t i as:", "cite_spans": [ { "start": 159, "end": 173, "text": "(Baayen, 2001)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Formulation", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (t i ) = (freq(t i ) \u2212 \u03b4) / F.", "eq_num": "(3)" } ], "section": "Probabilistic Formulation", "sec_num": "3.2" }, { "text": "Note that the discounting effect is stronger for low-frequency terms. For high-frequency terms, P (t i ) \u2248 freq(t i )/F .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Formulation", "sec_num": "3.2" }, { "text": "In the original definition, the value of \u03b4 was uniquely determined, for example as \u03b4 = m(1)/M with m(1) being the number of terms that appear exactly once in the text. However, we experimentally vary the value of \u03b4 in our study, because it is an essential factor for controlling the size and quality of the generated clusters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Formulation", "sec_num": "3.2" }, { "text": "Assuming that the probabilities assigned to documents are not affected by the discounting, P (d j |t i ) = freq(t i , d j ) / freq(t i ). 
Then, applying P (t i , d j ) = P (d j |t i )P (t i ), the co-occurrence probability of t i and d j is given as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Formulation", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (t i , d j ) = ((freq(t i ) \u2212 \u03b4) / freq(t i )) \u00b7 (freq(t i , d j ) / F ).", "eq_num": "(4)" } ], "section": "Probabilistic Formulation", "sec_num": "3.2" }, { "text": "Similarly, the co-occurrence probability of S T and S D is given as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Formulation", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (S T , S D ) = ((freq(S T ) \u2212 \u03b4) / freq(S T )) \u00b7 (freq(S T , S D ) / F ).", "eq_num": "(5)" } ], "section": "Probabilistic Formulation", "sec_num": "3.2" }, { "text": "The evaluation is based on an information-theoretic view of retrieval systems (Aizawa, 2000) . Let T and D be two random variables corresponding to the events of observing a term and a document, respectively. Denote their occurrence probabilities as P (T ) and P (D), and their co-occurrence probability as a joint distribution P (T , D). 
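As a minimal illustration of Eqs. (3)-(5), the discounted estimates can be sketched as follows (a toy binary term-document matrix of our own; all names and values are illustrative, not part of the original implementation):

```python
# Toy co-occurrence data: freq[t][d] = 1 if term t appears in document d
# (binary incidence, as primarily assumed in the text).
freq = {
    't1': {'d1': 1, 'd2': 1},
    't2': {'d2': 1, 'd3': 1},
    't3': {'d1': 1, 'd2': 1, 'd3': 1},
}
F = sum(f for row in freq.values() for f in row.values())  # freq(T, D)

def p_term(t, delta):
    # Eq. (3): absolute discounting of the term probability.
    return (sum(freq[t].values()) - delta) / F

def p_cooc(t, d, delta):
    # Eq. (4): ((freq(t) - delta) / freq(t)) * (freq(t, d) / F).
    ft = sum(freq[t].values())
    return (ft - delta) / ft * freq[t].get(d, 0) / F

def p_cluster(S_T, S_D, delta):
    # Eq. (5): the same discounting applied once to the agglomerated
    # event (S_T, S_D), whose frequencies are pooled before discounting.
    f_st = sum(sum(freq[t].values()) for t in S_T)
    f_std = sum(freq[t].get(d, 0) for t in S_T for d in S_D)
    return (f_st - delta) / f_st * f_std / F
```

Because p_cluster pools the member frequencies before subtracting \u03b4, an agglomerated event loses relatively less mass to discounting than its individual cells do.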
By the standard information-theoretic definition, the mutual information between T and D, denoted as I(T , D), is calculated as:", "cite_spans": [ { "start": 83, "end": 97, "text": "(Aizawa, 2000)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Criteria for Cluster Evaluation", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "I(T , D) = \u03a3 t i \u2208T \u03a3 d j \u2208D P (t i , d j ) log [ P (t i , d j ) / (P (t i )P (d j )) ],", "eq_num": "(6)" } ], "section": "Criteria for Cluster Evaluation", "sec_num": "3.3" }, { "text": "where the values of P (t i , d j ) and P (t i ) are calculated using Eqs. (3) and (4). P (d j ) is determined by P (d j ) = \u03a3 t i \u2208T P (t i , d j ), or approximated simply by P (d j ) = freq(d j )/F . Next, the mutual information after agglomerating S T and S D into a single cluster (Figure 2 ) is calculated as:", "cite_spans": [], "ref_spans": [ { "start": 231, "end": 240, "text": "(Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Criteria for Cluster Evaluation", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "I'(T , D) = \u03a3 t i \u2209S T \u03a3 d j \u2209S D P (t i , d j ) log [ P (t i , d j ) / (P (t i )P (d j )) ] + P (S T , S D ) log [ P (S T , S D ) / (P (S T )P (S D )) ],", "eq_num": "(7)" } ], "section": "Criteria for Cluster Evaluation", "sec_num": "3.3" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Criteria for Cluster Evaluation", "sec_num": "3.3" }, { "text": "P (S T ) = \u03a3 t i \u2208S T P (t i ) and P (S D ) = \u03a3 d j \u2208S D P (d j ).", "cite_spans": [], "ref_spans": 
[], "eq_spans": [], "section": "Criteria for Cluster Evaluation", "sec_num": "3.3" }, { "text": "The fitness of a cluster, denoted as \u03b4I(S T , S D ), is defined as the difference of the two information values given by Eqs. (6) and (7):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Criteria for Cluster Evaluation", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b4I(S T , S D ) = I'(T , D) \u2212 I(T , D) = P (S T , S D ) log [ P (S T , S D ) / (P (S T )P (S D )) ] \u2212 \u03a3 t i \u2208S T \u03a3 d j \u2208S D P (t i , d j ) log [ P (t i , d j ) / (P (t i )P (d j )) ].", "eq_num": "(8)" } ], "section": "Criteria for Cluster Evaluation", "sec_num": "3.3" }, { "text": "Without discounting, the value of \u03b4I(S T , S D ) in the above equation is always negative or zero. However, with discounting, the value becomes positive for uniformly dense clusters, because the frequencies of individual cells are always smaller than that of their agglomeration, and so the discounting effect is stronger for the former. 
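The sign behavior just described can be checked with a small sketch (toy joint probabilities of our own, not from the paper; without discounting, agglomeration never increases the information, by the log-sum inequality):

```python
import math

# Toy joint distribution P(t, d) over 3 terms and 3 documents (sums to 1).
P = {
    ('t1', 'd1'): 0.10, ('t1', 'd2'): 0.12,
    ('t2', 'd2'): 0.11, ('t2', 'd3'): 0.10,
    ('t3', 'd1'): 0.18, ('t3', 'd2'): 0.19, ('t3', 'd3'): 0.20,
}
Pt = {t: sum(p for (u, d), p in P.items() if u == t) for t in {'t1', 't2', 't3'}}
Pd = {d: sum(p for (t, e), p in P.items() if e == d) for d in {'d1', 'd2', 'd3'}}

def fitness(S_T, S_D):
    # Eq. (8): information change when (S_T, S_D) becomes one event.
    p_c = sum(P.get((t, d), 0.0) for t in S_T for d in S_D)
    p_st = sum(Pt[t] for t in S_T)
    p_sd = sum(Pd[d] for d in S_D)
    agglomerated = p_c * math.log(p_c / (p_st * p_sd))
    separate = sum(
        p * math.log(p / (Pt[t] * Pd[d]))
        for t in S_T for d in S_D
        for p in [P.get((t, d), 0.0)] if p > 0.0
    )
    return agglomerated - separate
```

Applying the discounting of Eq. (5) to the agglomerated event, as the method does, is what allows this difference to turn positive for uniformly dense clusters; the undiscounted sketch above stays at or below zero.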
Using the same formula, we calculate the significance weight of t i in c = (S T , S D ) as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Criteria for Cluster Evaluation", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b4I(t i , S D ) = \u03a3 d j \u2208S D P (t i , d j ) log [ P (t i , d j ) / (P (t i )P (d j )) ],", "eq_num": "(9)" } ], "section": "Criteria for Cluster Evaluation", "sec_num": "3.3" }, { "text": "and the significance weight of d j as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Criteria for Cluster Evaluation", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b4I(S T , d j ) = \u03a3 t i \u2208S T P (t i , d j ) log [ P (t i , d j ) / (P (t i )P (d j )) ].", "eq_num": "(10)" } ], "section": "Criteria for Cluster Evaluation", "sec_num": "3.3" }, { "text": "In other words, all the terms and documents in a cluster can be jointly ordered according to their contributions to the entropy calculation given by Eq. (7). To summarize, the proposed probabilistic formulation has the following two major features. First, clustering is generally defined as an operation of agglomerating a group of cells in the contingency table. Such an interpretation is unique because existing probabilistic approaches, including those with a duality view, agglomerate entire rows or columns of the contingency table all at once. Second, the estimation of the occurrence probability is not simply in proportion to the observed frequency. 
The discounting scheme enables us to trade off (i) the loss from averaging probabilities in the agglomerated clusters against (ii) the improvement of the probability estimates obtained from the larger sample sizes after agglomeration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Criteria for Cluster Evaluation", "sec_num": "3.3" }, { "text": "It should be noted that although we have restricted our focus to one-to-one correspondences between terms and documents, the proposed framework is directly applicable to more general cases with k(\u2265 2) attributes. Namely, given k random variables X 1 , \u2022 \u2022 \u2022 , X k , Eq. (8) can be extended as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Criteria for Cluster Evaluation", "sec_num": "3.3" }, { "text": "\u03b4I(S X 1 , \u2022 \u2022 \u2022 , S X k ) = P (S X 1 , \u2022 \u2022 \u2022 , S X k ) log [ P (S X 1 , \u2022 \u2022 \u2022 , S X k ) / (P (S X 1 ) \u2022 \u2022 \u2022 P (S X k )) ] \u2212 \u03a3 x 1 \u2208S X 1 \u2022 \u2022 \u2022 \u03a3 x k \u2208S X k P (x 1 , \u2022 \u2022 \u2022 , x k ) log [ P (x 1 , \u2022 \u2022 \u2022 , x k ) / (P (x 1 ) \u2022 \u2022 \u2022 P (x k )) ]. (11)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Criteria for Cluster Evaluation", "sec_num": "3.3" }, { "text": "The cluster generation process is defined as repeated iterations of a cluster initiation step and a cluster improvement step (Aizawa, 2002) . First, in the cluster initiation step, a single term t i is selected, and an initial cluster is then formulated by collecting the documents that contain t i and the terms that co-occur with t i within the same document. The collected subsets, respectively, become S D and S T of the initiated cluster. 
On the bipartite graph of terms and documents (Figure 2 ), the process can be viewed as a two-step expansion starting from t i .", "cite_spans": [ { "start": 121, "end": 135, "text": "(Aizawa, 2002)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 478, "end": 487, "text": "(Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Cluster Generation Procedure", "sec_num": "3.4" }, { "text": "Next, in the cluster improvement step, all the terms and documents in the initial cluster are tested for elimination in the order of increasing significance weights given by Eqs. (9) and (10). If the performance of the target cluster is improved after the elimination, then the corresponding term or document is removed. When finished with all the terms and documents in the cluster, the newly generated cluster is tested to see whether the evaluation value given by Eq. (8) is positive. Clusters that do not satisfy this condition are discarded. Note that the resulting cluster is only locally optimized, as the improvement depends on the order of examining terms and documents for elimination.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cluster Generation Procedure", "sec_num": "3.4" }, { "text": "At the initiation step, instead of randomly selecting an initiating term, our current implementation enumerates all the existing terms t i \u2208 T . We also limit the sizes of S T and S D to k max = 50 to avoid explosive computation caused by high frequency terms. Except for k max , the discounting coefficient \u03b4 is the only parameter that controls the sizes of the generated clusters. 
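The initiation and improvement steps just described might be sketched as follows (a hypothetical toy implementation; the elimination order uses a simple co-occurrence-count proxy for the significance weights of Eqs. (9) and (10), only terms are tested for elimination, and the k_max limit is omitted):

```python
import math

# Toy binary incidence: docs[t] = set of documents containing term t.
docs = {
    'a': {'d1', 'd2'}, 'b': {'d1', 'd2', 'd3'},
    'c': {'d3', 'd4'}, 'x': {'d4', 'd5'},
}
F = sum(len(s) for s in docs.values())  # total frequency freq(T, D)
delta = 0.5  # discounting coefficient

def p_joint(S_T, S_D):
    # Discounted probability of the agglomerated event, as in Eq. (5).
    f_st = sum(len(docs[t]) for t in S_T)
    f_std = sum(len(docs[t] & S_D) for t in S_T)
    return (f_st - delta) / f_st * f_std / F

def p_t(S_T):
    return (sum(len(docs[t]) for t in S_T) - delta) / F

def p_d(S_D):
    # Marginal approximated by freq(d)/F, as allowed in the text.
    return sum(len(docs[t] & S_D) for t in docs) / F

def fitness(S_T, S_D):
    # Eq. (8), with the per-cell sum restricted to observed cells.
    p_c = p_joint(S_T, S_D)
    agg = p_c * math.log(p_c / (p_t(S_T) * p_d(S_D)))
    sep = sum(
        p_joint([t], {d}) * math.log(p_joint([t], {d}) / (p_t([t]) * p_d({d})))
        for t in S_T for d in (docs[t] & S_D)
    )
    return agg - sep

def initiate(seed):
    # Cluster initiation: a two-step expansion on the bipartite graph.
    S_D = set(docs[seed])
    S_T = [t for t in docs if docs[t] & S_D]
    return S_T, S_D

def improve(S_T, S_D):
    # Greedy improvement: try dropping terms, weakest first.
    for t in sorted(S_T, key=lambda u: len(docs[u] & S_D)):
        if len(S_T) > 1:
            trial = [u for u in S_T if u != t]
            if fitness(trial, S_D) > fitness(S_T, S_D):
                S_T = trial
    return S_T, S_D
```

As in the text, the result is only locally optimized: a different elimination order can leave a different cluster.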
The effect of \u03b4 is examined in detail in the following experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cluster Generation Procedure", "sec_num": "3.4" }, { "text": "In our experiments, we used NTCIR-J1 (http://research.nii.ac.jp/ntcir/), a Japanese text collection for retrieval tasks that is composed of abstracts of conference papers organized by Japanese academic societies. In preparing the data for the experiments, we first selected 52,867 papers from five different societies: 23,105 from the Society of Polymer Science, Japan (SPSJ), 20,482 from the Japan Society of Civil Engineers (JSCE), 4,832 from the Japan Society for Precision Engineering (JSPE), 2,434 from the Ecological Society of Japan (ESJ), and 2,014 from the Japanese Society for Artificial Intelligence (JSAI).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Data Set", "sec_num": "4.1" }, { "text": "The papers were then analyzed by the morphological analyzer ChaSen Ver.2.02 (Matsumoto et al., 1999) to extract nouns and compound nouns using the part-of-speech tags. Next, the co-occurrence frequencies between documents and terms were collected. After preprocessing, the number of distinct terms was 772,852 for the 52,867 documents.", "cite_spans": [ { "start": 76, "end": 100, "text": "(Matsumoto et al., 1999)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "The Data Set", "sec_num": "4.1" }, { "text": "In our first experiment, we used a framework of unsupervised text categorization, where the quality of the generated clusters was evaluated by the goodness of the separation between different societies. To investigate the effect of the discounting parameter, it was given the values \u03b4 = 0.1, 0.3, 0.5, 0.7, 0.9, 0.95. Table 1 compares the total number of generated clusters (c), the average number of documents per cluster (s d ), and the average number of terms per cluster (s t ), for different values of \u03b4. 
We also examined the ratio of clusters that consist only of documents from a single society (the single-society ratio, r s ), and an inside-cluster ratio (r i ), defined as the average relative weight of the dominant society in each cluster. Here, the weight of each society within a cluster was calculated as the sum of the significance weights of its component documents given by Eq. (10).", "cite_spans": [], "ref_spans": [ { "start": 354, "end": 361, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Clustering Results", "sec_num": "4.2" }, { "text": "The results shown in Table 1 indicate that reducing the value of \u03b4 improves the quality of the generated clusters: with smaller \u03b4, the single-society ratio and the inside-cluster ratio become higher, while the number of generated clusters becomes smaller. ", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 28, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Clustering Results", "sec_num": "4.2" }, { "text": "In our second experiment, we used a framework of supervised text categorization, where the generated clusters were used as indices for classifying documents among the existing societies, and the categorization performance was examined. For this purpose, the documents were first divided into a training set of 50,182 documents and a test set of 2,641 documents. Then, assuming that the originating societies of the training documents are known, the significance weights of the five societies were calculated for each cluster generated in the previous experiments. 
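The society-weighting step, and the subsequent assignment of a test document from the clusters containing it, can be sketched as follows (document names, weights, and labels are illustrative only, not data from the experiments):

```python
from collections import defaultdict

# Illustrative inputs: per-cluster significance weights of training
# documents (Eq. (10)) and the society label of each training document.
cluster_doc_weights = [
    {'d1': 0.9, 'd2': 0.4, 'd3': 0.3},
    {'d3': 0.7, 'd4': 0.8},
]
society_of = {'d1': 'SPSJ', 'd2': 'SPSJ', 'd3': 'JSAI', 'd4': 'JSAI'}

def society_weights(weights):
    # Weight of each society within a cluster: the sum of the significance
    # weights of its training documents from that society.
    w = defaultdict(float)
    for d, v in weights.items():
        w[society_of[d]] += v
    return w

def classify(test_clusters):
    # A test document is scored by the society weights of every cluster
    # that contains it, and assigned to the top-scoring society.
    score = defaultdict(float)
    for c in test_clusters:
        for s, v in society_weights(cluster_doc_weights[c]).items():
            score[s] += v
    return max(score, key=score.get)
```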
Next, the test documents were assigned to one of the five societies based on the memberships of the multiple clusters to which they belong.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorization Results", "sec_num": "4.3" }, { "text": "For comparison, two supervised text categorization methods, naive Bayes and Support Vector Machine (SVM), were also applied to the same training and test sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorization Results", "sec_num": "4.3" }, { "text": "The results are shown in Table 2 . In this case, the performance was better for larger \u03b4, indicating that the major factor determining the categorization performance was the number of clusters rather than their quality. For \u03b4 = 0.5 \u223c 0.95, each test document appeared in at least one of the generated clusters, and the performance was almost comparable to that of standard text categorization methods: slightly better than naive Bayes, but not as good as SVM. We also compared the performance for various training-set sizes and for different combinations of societies, but the tendency remained the same. Table 3 compares the patterns of misclassification, where the columns and rows represent the classified and the real categories, respectively. It can be seen that, as far as minor categories such as ESJ and JSAI are concerned, the proposed micro-clustering method performed slightly better than SVM. The reason may be that the former method is based on locally formed clusters and is less affected by the skew of the distribution of category sizes. 
However, the details are left for further investigation.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 627, "end": 634, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Categorization Results", "sec_num": "4.3" }, { "text": "In addition, by manually analyzing the individual misclassified documents, we confirmed that most of them dealt with inter-domain topics. For example, nine out of the ten JSCE documents misclassified as ESJ were related to environmental issues; six out of the 14 JSPE documents misclassified as JSCE, as well as all seven JSPE documents misclassified as JSAI, were related to the application of artificial intelligence techniques. These were the major causes of the performance difference between the two methods. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of categorization errors", "sec_num": null }, { "text": "We also tested the categorization performance without local improvement, in which case at most the top 50 terms survive unconditionally after the initial clusters are formed. In this setting, the clustering works similarly to automatic relevance feedback in information retrieval. Using the same data set, the result was 2,564 correct judgments (F-value 0.971), which shows the effectiveness of local improvement in reducing noise in automatic relevance feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of local improvement:", "sec_num": null }, { "text": "Because we do not apply any duplication check in our generation step, the same cluster may appear repeatedly in the resulting cluster set. We also tested the case where clusters whose term or document sets are identical to those of existing better-performing clusters were eliminated. The obtained categorization performance was slightly worse than that without elimination. 
For example, the best performance obtained for \u03b4 = 0.9 was 2,582 correct judgments (F-value 0.978) with 137,867 clusters (a 30% reduction).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of cluster duplication check:", "sec_num": null }, { "text": "The results indicate that the system does not necessarily require expensive redundancy checks for the generated clusters as a whole. Such a check becomes necessary when the formulated clusters are presented to users, in which case the duplication check can be applied only locally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of cluster duplication check:", "sec_num": null }, { "text": "In this paper, we reported a method of generating overlapping micro-clusters in which documents, terms, and other related elements of text-based information are grouped together.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Comparing the proposed micro-clustering method with existing text categorization methods, the distinctive feature of the former is that borderline documents are readily viewed and examined. In addition, the terms in a cluster can be further utilized in digesting the descriptions of the clustered documents. Such properties of micro-clustering may be particularly important when the system actually interacts with its users.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "For comparison purposes, we have used only the conventional documents-and-terms feature space in our experiments. However, the proposed micro-clustering framework can be applied more flexibly to other cases as well. For example, we have also generated clusters using the co-occurrences of the triple of documents, terms, and authors. 
Although the text categorization performance was not much different (2,584 correct judgments out of 2,639, with slightly improved precision), we can confirm that many of the highly ranked clusters contain documents produced by the same group of authors, which emphasizes the characteristics of the generated clusters. Future issues include: (i) enhancing the probabilistic models by considering other discounting techniques from linguistic studies; (ii) developing a strategy for initiating clusters by combining different attribute sets, such as documents or authors; and (iii) establishing a method of evaluating overlapping clusters. We are also looking into the possibility of applying the proposed framework to Web document clustering problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The feature quantity: An information theoretic perspective of tfidf-like measures", "authors": [ { "first": "A", "middle": [], "last": "Aizawa", "suffix": "" } ], "year": 2000, "venue": "Proc. of ACM SIGIR 2000", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Aizawa. 2000. The feature quantity: An information theoretic perspective of tfidf-like measures. In Proc. of ACM SIGIR 2000, pages 104-111.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An approach to microscopic clustering of terms and documents", "authors": [ { "first": "A", "middle": [], "last": "Aizawa", "suffix": "" } ], "year": 2002, "venue": "Proc. of the 7th Pacific Rim Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Aizawa. 2002. An approach to microscopic clustering of terms and documents. In Proc. 
of the 7th Pacific Rim Conference on Artificial Intelligence (to appear).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Word frequency distributions", "authors": [ { "first": "R", "middle": [ "H" ], "last": "Baayen", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. H. Baayen. 2001. Word frequency distributions. Kluwer Academic Publishers.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Indexing by latent semantic analysis", "authors": [ { "first": "S", "middle": [], "last": "Deerwester", "suffix": "" }, { "first": "S", "middle": [ "T" ], "last": "Dumais", "suffix": "" }, { "first": "G", "middle": [ "W" ], "last": "Furnas", "suffix": "" }, { "first": "T", "middle": [ "K" ], "last": "Landauer", "suffix": "" }, { "first": "R", "middle": [], "last": "Harshman", "suffix": "" } ], "year": 1990, "venue": "Journal of American Society of Information Science", "volume": "41", "issue": "", "pages": "391--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman. 1990. Indexing by latent semantic analysis. Journal of American Society of Information Science, 41:391-407.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Concept decomposition for large sparse text data using clustering", "authors": [ { "first": "I", "middle": [ "S" ], "last": "Dhillon", "suffix": "" }, { "first": "D", "middle": [ "S" ], "last": "Modha", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. S. Dhillon and D. S. Modha. 1999. Concept decomposition for large sparse text data using clustering. 
Research Report RJ 10147, IBM Almaden Research Center.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Co-clustering documents and words using bipartite spectral graph partitioning", "authors": [ { "first": "I", "middle": [ "S" ], "last": "Dhillon", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. S. Dhillon. 2001. Co-clustering documents and words using bipartite spectral graph partitioning. Technical Report 2001-05, UT Austin CS Dept.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Cluster-based text categorization: a comparison of category search strategies", "authors": [ { "first": "M", "middle": [], "last": "Iwayama", "suffix": "" }, { "first": "T", "middle": [], "last": "Tokunaga", "suffix": "" } ], "year": 1995, "venue": "Proc. of ACM SIGIR'95", "volume": "", "issue": "", "pages": "273--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Iwayama and T. Tokunaga. 1995. Cluster-based text categorization: a comparison of category search strategies. In Proc. of ACM SIGIR'95, pages 273-281.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A statistical learning model of text classification for support vector machines", "authors": [ { "first": "T", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2001, "venue": "Proc. of ACM SIGIR", "volume": "", "issue": "", "pages": "128--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Joachims. 2001. A statistical learning model of text classification for support vector machines. In Proc. 
of ACM SIGIR 2001, pages 128-136.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Fast supervised dimensionality reduction algorithm with applications to document categorization and retrieval", "authors": [ { "first": "G", "middle": [], "last": "Karypis", "suffix": "" }, { "first": "E.-H", "middle": [], "last": "Han", "suffix": "" } ], "year": 2000, "venue": "Proc. of the 9th ACM International Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "12--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Karypis and E.-H. Han. 2000. Fast supervised dimensionality reduction algorithm with applications to document categorization and retrieval. In Proc. of the 9th ACM International Conference on Information and Knowledge Management, pages 12-19.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Morphological analysis system chasen 2.0.2 users manual", "authors": [ { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" }, { "first": "A", "middle": [], "last": "Kitauchi", "suffix": "" }, { "first": "T", "middle": [], "last": "Yamashita", "suffix": "" }, { "first": "Y", "middle": [], "last": "Hirano", "suffix": "" }, { "first": "K", "middle": [], "last": "Matsuda", "suffix": "" }, { "first": "M", "middle": [], "last": "Asahara", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Matsumoto, A. Kitauchi, T. Yamashita, Y. Hirano, K. Matsuda, and M. Asahara. 1999. Morphological analysis system chasen 2.0.2 users manual. 
NAIST Technical Report NAIST-IS-TR99012, Nara Institute of Science and Technology.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Document clustering using word clusters via the information bottleneck method", "authors": [ { "first": "N", "middle": [], "last": "Slonim", "suffix": "" }, { "first": "N", "middle": [], "last": "Tishby", "suffix": "" } ], "year": 2000, "venue": "Proc. of ACM SIGIR 2000", "volume": "", "issue": "", "pages": "208--215", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Slonim and N. Tishby. 2000. Document clustering using word clusters via the information bottleneck method. In Proc. of ACM SIGIR 2000, pages 208-215.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A comparison of document clustering techniques", "authors": [ { "first": "M", "middle": [], "last": "Steinbach", "suffix": "" }, { "first": "G", "middle": [], "last": "Karypis", "suffix": "" }, { "first": "V", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2000, "venue": "KDD Workshop on Text Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Steinbach, G. Karypis, and V. Kumar. 2000. A comparison of document clustering techniques. In KDD Workshop on Text Mining.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Web document clustering: A feasibility demonstration", "authors": [ { "first": "O", "middle": [], "last": "Zamir", "suffix": "" }, { "first": "O", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 1998, "venue": "Proc. of ACM SIGIR'98", "volume": "", "issue": "", "pages": "46--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "O. Zamir and O. Etzioni. 1998. Web document clustering: A feasibility demonstration. In Proc. 
of ACM SIGIR'98, pages 46-54.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Cluster-based indexing of information spaces.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Example of a cluster defined on a co-occurrence matrix. When a cluster c is being considered, T and D in the above definitions are changed to S_T and S_D. In this case, freq(t_i, S_D) and freq(S_T, d_j) represent the frequencies of t_i and d_j within c = (S_T, S_D), respectively. In the co-occurrence matrix, a cluster is expressed as a 'rectangular' region if terms and documents are so permuted (Figure 2).", "type_str": "figure", "num": null, "uris": null }, "TABREF0": { "content": "
\u03b4       cs       ds    tr    sr      i
0.10   136,832   3.25   9.3   0.953  0.983
0.30   187,079   3.94  29.4   0.896  0.960
0.50   196,208   4.81  39.7   0.866  0.951
0.70   196,911   5.39  44.4   0.851  0.948
0.90   197,164   5.81  46.3   0.841  0.945
0.95   197,193   5.89  46.6   0.839  0.944
", "text": "Summary of clustering results.", "html": null, "type_str": "table", "num": null }, "TABREF1": { "content": "
\u03b4            correct  judged  F-value
0.10           2,370   2,446    0.932
0.30           2,520   2,623    0.957
0.50           2,575   2,641    0.975
0.70           2,583   2,641    0.978
0.90           2,584   2,641    0.978
0.95           2,583   2,641    0.978
naive Bayes    2,579   2,641    0.977
SVM            2,602   2,641    0.985
", "text": "Summary of categorization results.", "html": null, "type_str": "table", "num": null }, "TABREF2": { "content": "
(a) Micro-clustering results (rows: real class; columns: judged class)
        SPSJ  JSCE  JSPE  ESJ  JSAI
SPSJ    1146     7     2    0     0
JSCE       5  1007     1   10     1
JSPE       3    14   216    1     7
ESJ        0     1     0  120     0
JSAI       0     3     1    1    95
(b) Text categorization results (rows: real class; columns: judged class)
        SPSJ  JSCE  JSPE  ESJ  JSAI
SPSJ    1150     2     3    0     0
JSCE       2  1017     1    2     2
JSPE       5     9   226    1     0
ESJ        0     2     0  119     0
JSAI       1     3     6    0    90
", "text": "Analysis of misclassification.", "html": null, "type_str": "table", "num": null } } } }