| { |
| "paper_id": "R13-1041", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:56:34.125985Z" |
| }, |
| "title": "A Boosting-based Algorithm for Classification of Semi-Structured Text using Frequency of Substructures", |
| "authors": [ |
| { |
| "first": "Tomoya", |
| "middle": [], |
| "last": "Iwakura", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Fujitsu Laboratories Ltd", |
| "location": {} |
| }, |
| "email": "iwakura.tomoya@jp.fujitsu.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Research in text classification currently focuses on challenging tasks such as sentiment classification, modality identification, and so on. In these tasks, approaches that use a structural representation, such as a tree, have shown better performance than a bag-of-words representation. In this paper, we propose a boosting algorithm for classifying a text that is a set of sentences represented by trees. The algorithm learns rules represented by subtrees together with their frequency information. Existing boosting-based algorithms use subtrees as features without considering their frequency, because the existing algorithms targeted a sentence rather than a text. In contrast, our algorithm learns how the occurrence frequency of each subtree matters for classification. Experiments on topic identification of Japanese news articles and English sentiment classification show the effectiveness of subtree features with their frequency.", |
| "pdf_parse": { |
| "paper_id": "R13-1041", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Research in text classification currently focuses on challenging tasks such as sentiment classification, modality identification, and so on. In these tasks, approaches that use a structural representation, such as a tree, have shown better performance than a bag-of-words representation. In this paper, we propose a boosting algorithm for classifying a text that is a set of sentences represented by trees. The algorithm learns rules represented by subtrees together with their frequency information. Existing boosting-based algorithms use subtrees as features without considering their frequency, because the existing algorithms targeted a sentence rather than a text. In contrast, our algorithm learns how the occurrence frequency of each subtree matters for classification. Experiments on topic identification of Japanese news articles and English sentiment classification show the effectiveness of subtree features with their frequency.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Text classification is used to classify texts such as news articles, E-mails, social media posts, and so on. A number of machine learning algorithms have been applied to text classification successfully. Text classification handles not only tasks to identify topics, such as politics, finance, sports or entertainment, but also challenging tasks such as categorization of customer E-mails and reviews by types of claims, subjectivity or sentiment (Wiebe, 2000; Banea et al., 2010; Bandyopadhyay and Okumura, 2011) . To identify difficult categories on challenging tasks, a traditional bag-of-words representation may not be sufficient. Therefore, a richer, structural representation is used rather than the traditional bag-of-words. A straightforward way to extend the traditional bag-of-words representation is to heuristically add new types of features such as fixed-length n-grams such as word bi-gram or tri-gram, or fixed-length syntactic relations. Instead of such approaches, learning algorithms that handle semi-structured data have become increasingly popular (Kudo and Matsumoto, 2004; Kudo et al., 2005; Ifrim et al., 2008; Okanohara and Tsujii, 2009). This is because these algorithms can learn better substructures for each task from semi-structured texts annotated with parts-of-speech, base-phrase information, or syntactic relations.", |
| "cite_spans": [ |
| { |
| "start": 447, |
| "end": 460, |
| "text": "(Wiebe, 2000;", |
| "ref_id": null |
| }, |
| { |
| "start": 461, |
| "end": 480, |
| "text": "Banea et al., 2010;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 481, |
| "end": 513, |
| "text": "Bandyopadhyay and Okumura, 2011)", |
| "ref_id": null |
| }, |
| { |
| "start": 1069, |
| "end": 1095, |
| "text": "(Kudo and Matsumoto, 2004;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 1096, |
| "end": 1114, |
| "text": "Kudo et al., 2005;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1115, |
| "end": 1134, |
| "text": "Ifrim et al., 2008;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1135, |
| "end": 1162, |
| "text": "Okanohara and Tsujii, 2009)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Among such learning algorithms, boosting-based algorithms have practical advantages. Boosting-based learning algorithms have been applied successfully to Natural Language Processing problems, including text classification (Kudo and Matsumoto, 2004), English syntactic chunking (Kudo et al., 2005), zero-anaphora resolution (Iida et al., 2006), and so on. Furthermore, classifiers trained with boosting-based learners have shown faster classification speeds (Kudo and Matsumoto, 2004) than Support Vector Machines with a tree kernel (Collins and Duffy, 2002).", |
| "cite_spans": [ |
| { |
| "start": 222, |
| "end": 248, |
| "text": "(Kudo and Matsumoto, 2004)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 277, |
| "end": 296, |
| "text": "(Kudo et al., 2005)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 323, |
| "end": 342, |
| "text": "(Iida et al., 2006)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 457, |
| "end": 483, |
| "text": "(Kudo and Matsumoto, 2004)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 532, |
| "end": 557, |
| "text": "(Collins and Duffy, 2002)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "However, existing boosting-based algorithms for semi-structured data, boosting algorithms for classification (Kudo and Matsumoto, 2004) and for ranking (Kudo et al., 2005), leave room for improvement: the weak learners used in these algorithms learn classifiers that do not consider the frequency of substructures. This is because these algorithms targeted a sentence as their input rather than a document or text consisting of two or more sentences. Therefore, even if crucial substructures appear several times in their target texts, these algorithms cannot reflect such frequency. For example, in sentiment classification, different types of negative expressions may be preferred over a positive expression that appears several times. As a result, a positive text that uses the same positive expression several times together with a few types of negative expressions may be classified as a negative text, because frequency is not taken into account.", |
| "cite_spans": [ |
| { |
| "start": 109, |
| "end": 135, |
| "text": "(Kudo and Matsumoto, 2004)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 152, |
| "end": 171, |
| "text": "(Kudo et al., 2005)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper proposes a boosting-based algorithm for semi-structured data that considers the occurrence frequency of substructures. To simplify the problem, we first assume that a text to be classified is represented as a set of sentences represented by labeled ordered trees (Abe et al., 2002). A word sequence, base-phrase annotations, a dependency tree, and an XML document can all be modeled as labeled ordered trees. Experiments on topic identification of news articles and sentiment classification confirm the effectiveness of subtree features with their frequency.", |
| "cite_spans": [ |
| { |
| "start": 274, |
| "end": 292, |
| "text": "(Abe et al., 2002)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Prior boosting-based algorithms for semi-structured data, such as boosting algorithms for classification (Kudo and Matsumoto, 2004) and for ranking (Kudo et al., 2005), learn classifiers that do not consider the frequency of substructures. Ifrim et al. (Ifrim et al., 2008) proposed a logistic regression model with variable-length N-gram features.", |
| "cite_spans": [ |
| { |
| "start": 105, |
| "end": 131, |
| "text": "(Kudo and Matsumoto, 2004)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 148, |
| "end": 167, |
| "text": "(Kudo et al., 2005)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 253, |
| "end": 273, |
| "text": "(Ifrim et al., 2008)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The logistic regression learns the weights of N-gram features. Compared with these two algorithms, our algorithm learns frequency thresholds to consider occurrence frequency of each subtree.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Okanohara and Tsujii (Okanohara and Tsujii, 2009) proposed a document classification method that uses all substrings as features. The method uses suffix arrays (Manber and Myers, 1990) to handle all substrings efficiently; therefore, it cannot handle the trees used in our method. Their method uses feature values of N-gram features, such as term frequency, inverse document frequency, and so on, in a logistic regression. In contrast, our algorithm learns a threshold for feature values. Tree kernels (Collins and Duffy, 2002; Kashima and Koyanagi, 2002) implicitly map an example represented as a labeled ordered tree into the space of all subtrees, and a Tree kernel can consider the frequency of subtrees. However, as discussed in (Kudo and Matsumoto, 2004), when a Tree kernel is applied to sparse data, kernel dot products between similar instances become much larger than those between different instances. As a result, this sometimes leads to overfitting in training. In contrast, our boosting algorithm considers the frequency of subtrees by learning frequency thresholds of subtrees. Therefore, we think the problems caused by Tree kernels tend not to occur, for the reasons presented in (Kudo and Matsumoto, 2004).", |
| "cite_spans": [ |
| { |
| "start": 21, |
| "end": 49, |
| "text": "(Okanohara and Tsujii, 2009)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 160, |
| "end": 184, |
| "text": "(Manber and Myers, 1990)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 502, |
| "end": 527, |
| "text": "(Collins and Duffy, 2002;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 528, |
| "end": 555, |
| "text": "Kashima and Koyanagi, 2002)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 735, |
| "end": 761, |
| "text": "(Kudo and Matsumoto, 2004)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 1198, |
| "end": 1224, |
| "text": "(Kudo and Matsumoto, 2004)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "3 A Boosting-based Learning Algorithm for Classifying Trees", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We describe the problem treated by our boostingbased learner as follows. Let X be all labeled ordered trees, or simply trees, and Y be a set of labels {\u22121, +1}. A labeled ordered tree is a tree where each node is associated with a label. Each node is also ordered among its siblings. Therefore, there are a first child, second child, third child, and so on (Abe et al., 2002) . Let S be a set of training samples", |
| "cite_spans": [ |
| { |
| "start": 357, |
| "end": 375, |
| "text": "(Abe et al., 2002)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "{(x 1 , y 1 ), ..., (x m , y m )},", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where each example x i \u2208 X is a set of labeled ordered trees, and y i \u2208 Y is a class label.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The goal is to induce a mapping", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "F : X \u2192 Y from S.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Then, we define subtrees (Abe et al., 2002) .", |
| "cite_spans": [ |
| { |
| "start": 25, |
| "end": 43, |
| "text": "(Abe et al., 2002)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Definition 1 Subtree Let u and t be labeled ordered trees. We call t a subtree of u if there exists a one-to-one mapping \u03c6 from the nodes of t to the nodes of u satisfying the following conditions: (1) \u03c6 preserves the parent relation, (2) \u03c6 preserves the sibling relation, and (3) \u03c6 preserves the labels. We denote that t is a subtree of u as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "t \u2286 u .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "If a tree t is not a subtree of u, we denote it as t \u2288 u .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We define the frequency of the subtree t in u as the number of times t occurs in u, and denote it as |t \u2286 u| .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The number of nodes in a tree t is referred to as the size of the tree t, and is denoted |t| .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To represent a set of labeled ordered trees, we use a single tree created by connecting the trees to the root node of the single tree. Figure 1 shows an example of the subtrees of a tree consisting of two sentences, \"a b c\" and \"a b\", connected with the root node R\u20dd. The trees in the right box are a portion of the subtrees of the tree on the left. Let u be the tree on the left side. For example, the size of the subtree a\u20ddb\u20dd (i.e., | a\u20ddb\u20dd |) is 2, and the frequency | a\u20ddb\u20dd \u2286 u| is also 2. For the subtree a\u20ddc\u20dd, the size | a\u20ddc\u20dd | is also 2; however, the frequency", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 135, |
| "end": 143, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "| a\u20ddc\u20dd \u2286 u | is 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "3.2 A Classifier for Trees with the Occurrence Frequency of a Subtree We define a classifier for trees that is used as a weak hypothesis in this paper. A boosting algorithm for classifying trees uses subtree-based decision stumps, and each decision stump learned by the boosting algorithm classifies a tree according to whether or not a given subtree occurs in it (Kudo and Matsumoto, 2004). To consider the frequency of a subtree, we define the following decision stump.", |
| "cite_spans": [ |
| { |
| "start": 364, |
| "end": 390, |
| "text": "(Kudo and Matsumoto, 2004)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Let t and u be trees, z be a positive integer called the frequency threshold, and a and b be real numbers, called confidence values; then a classifier for trees is defined as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 2 Classifier for trees", |
| "sec_num": null |
| }, |
| { |
| "text": "h \u27e8t,z,a,b\u27e9 (u) = a if t \u2286 u \u2227 z \u2264 |t \u2286 u|; \u2212a if t \u2286 u \u2227 |t \u2286 u| < z; b otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 2 Classifier for trees", |
| "sec_num": null |
| }, |
| { |
| "text": "Each decision stump has a subtree t and its frequency threshold z as the condition of classification, and two scores, a and b. If t is a subtree of u (i.e., t \u2286 u) and the frequency of the subtree |t \u2286 u| is greater than or equal to the frequency threshold z, the score a is assigned to the tree. If u satisfies t \u2286 u and |t \u2286 u| is less than z, the score \u2212a is assigned to the tree. If t is not a subtree of u (i.e., t \u2288 u), the score b is assigned to the tree. This classifier is an extension, for classifying trees, of the decision trees learned by algorithms like C4.5 (Quinlan, 1993). For example, C4.5 learns thresholds for features that have continuous values and uses those thresholds to classify samples containing continuous values. In a similar way, each decision stump for trees uses a frequency threshold to classify samples by the frequency of a subtree.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 2 Classifier for trees", |
| "sec_num": null |
| }, |
| { |
| "text": "Classifying Trees To induce accurate classifiers, a boosting algorithm is applied. Boosting is a method that creates a final hypothesis by repeatedly generating a weak hypothesis at each training iteration with a given weak learner. These weak hypotheses are combined into the final hypothesis. We use the real AdaBoost of BoosTexter (Schapire and Singer, 2000), since real AdaBoost-based text classifiers show better performance than other algorithms such as discrete AdaBoost (Freund and Schapire, 1997).", |
| "cite_spans": [ |
| { |
| "start": 334, |
| "end": 361, |
| "text": "(Schapire and Singer, 2000)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 478, |
| "end": 505, |
| "text": "(Freund and Schapire, 1997)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Our boosting-based learner selects R types of rules over several training iterations to create a final hypothesis F, which is defined as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "F (u) = sign( \u2211 R r=1 h \u27e8tr,zr,ar,br\u27e9 (u)).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We use a learning algorithm that learns a subtree and its frequency threshold as a rule from given training samples", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "S = {(x i , y i )} m i=1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "and weights over samples {w r,1 , ..., w r,m } as a weak learner. By training the learning algorithm R times with different weights of samples, we obtain R types of rules.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "w r,i is the weight of sample number i after selecting r \u2212 1 types of rules, where 0 < w r,i , 1 \u2264 i \u2264 m and 1 \u2264 r \u2264 R. We set w 1,i to 1/m.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Let W r\u27e8y,\u2264,z\u27e9 (t) be the sum of the weights of samples that satisfy t", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\u2286 x i (1 \u2264 i \u2264 m), z \u2264 |t \u2286 x i | and y i = y (y \u2208 {\u00b11}), W r\u27e8y,\u2264,z\u27e9 (t) = \u2211 i\u2208{i \u2032 |t\u2286x i \u2032 } w r,i [[C \u2264 (x i , t, y, z)]],", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "[[C \u2264 (x, t, y, z)]] is [[y i = y \u2227 z \u2264 |t \u2286 x|]]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "and [[\u03c0]] is 1 if a proposition \u03c0 holds and 0 otherwise. Similarly, let W r\u27e8y,<,z\u27e9 (t) be the sum of the weights of samples that satisfy", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "t \u2286 x i , |t \u2286 x i | < z and y i = y, W r\u27e8y,<,z\u27e9 (t) = \u2211 i\u2208{i \u2032 |t\u2286x i \u2032 } w r,i [[C < (x i , t, y, z)]],", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "where ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "[[C < (x, t, y, z)]] is [[y i = y \u2227 |t \u2286 x| < z]].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Let W \u00ac r\u27e8y\u27e9 (t) be the sum of the weights of samples that satisfy t \u2288 x i and y i = y: W \u00ac r\u27e8y\u27e9 (t) = \u2211 i\u2208{i \u2032 |t\u2288x i \u2032 \u2227y i =y} w r,i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "To select a tree t and a frequency threshold z, the following gain is used as the criterion:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "gain(t, z) def = | \u221a W r\u27e8+1,z\u27e9 (t) \u2212 \u221a W r\u27e8\u22121,z\u27e9 (t)| + | \u221a W \u00ac r\u27e8+1\u27e9 (t) \u2212 \u221a W \u00ac r\u27e8\u22121\u27e9 (t)|.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Finding the decision stump that maximizes gain is equivalent to finding the decision stump that minimizes the upper bound of the training error for real AdaBoost (Schapire and Singer, 2000; Collins and Koo, 2005). At boosting round r, a weak learner selects a subtree t r (t r \u2208 X ) and a frequency threshold z r that maximize gain as a rule from training samples S with the weights of training samples {w r,1 , ..., w r,m }:", |
| "cite_spans": [ |
| { |
| "start": 162, |
| "end": 189, |
| "text": "(Schapire and Singer, 2000;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 190, |
| "end": 212, |
| "text": "Collins and Koo, 2005)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "(t r , z r ) = arg max (t \u2032 ,z \u2032 )\u2208ZT gain(t \u2032 , z \u2032 ),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "where ZT is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "{(t, z) | t \u2208 \u222a m i=1 {t|t \u2286 x i } \u2227 1 \u2264 z \u2264 max 1\u2264i\u2264m |t \u2286 x i |}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Then the boosting-based learner calculates the confidence values of t r and updates the weight of each sample. The confidence values a r and b r are defined as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "a r = 1 2 log( W r\u27e8+1,z\u27e9 (tr) / W r\u27e8\u22121,z\u27e9 (tr) ), and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "b r = 1 2 log( W \u00ac r\u27e8+1\u27e9 (tr) / W \u00ac r\u27e8\u22121\u27e9 (tr) ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "After the calculation of the confidence values for t r and z r , the learner updates the weight of each sample with", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "w r+1,i = w r,i exp(\u2212y i h \u27e8tr,zr,ar,br\u27e9 (x i ))/Z r , (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "where Z r is a normalization factor for \u2211 m i=1 w r+1,i = 1. Then the learner adds t r , z r , a r , and b r to F as the r-th rule and its confidence values. The learner continues training until it obtains R rules.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Boosting-based Rule Learning for", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We use an efficient method, rightmost-extension, to enumerate all subtrees from a given tree without duplication (Abe et al., 2002; Zaki, 2002) as", |
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 131, |
| "text": "(Abe et al., 2002;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 132, |
| "end": 143, |
| "text": "Zaki, 2002)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "## S = {(x i , y i )} m i=1 : x i \u2286X , y i \u2208 {\u00b11} ## W r = {w r,i } m i=1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": ": Weights of samples after ## learning r types of rules. w 1,i = 1/m ## r : The current rule number. ## The initial value of r is 1. ## T l : A set of subtrees of size l. ## T 1 is a set of all nodes. procedure BoostingForClassifyingTree() While (r \u2264 R) ## Learning a rule with the weak-learner {t r , z r } = weak-learner(T 1 , S, W r ); ## Update weights with {t r , z r } a r = 1 2 log(", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "W r\u27e8+1,\u2264,zr \u27e9 (tr) / W r\u27e8\u22121,\u2264,zr \u27e9 (tr) ) b r = 1 2 log( W \u00ac r\u27e8+1\u27e9 (tr) / W \u00ac r\u27e8\u22121\u27e9 (tr) ) ## Update weights. Z r is a normalization ## factor for \u2211 m i=1 w r+1,i = 1. For i=1,..,m w r+1,i = w r,i exp(\u2212y i h \u27e8tr,zr,ar,br\u27e9 (x i ))/Z r r++; end While return F (u) = sign( \u2211 R r=1 h \u27e8tr,zr,ar,br\u27e9 (u)).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "## learning a rule procedure weak-learner(T l , S, W r ) ## Select the best rule from ## subtrees of size l in T l . (t l , z l ) = selectRule(T l , S, W r ) ## If the selected (t l , z l ) is better than ## current optimal rule (t o , z o ), ## the (t o , z o ) is replaced with (t l , z l ). in (Kudo and Matsumoto, 2004). The rightmost-extension starts with a set of trees consisting of single nodes, and then expands a given tree of size k \u2212 1 by attaching a new node to this tree to obtain trees of size k. The rightmost extension enumerates trees by restricting the position of attachment of new nodes. A new node is added to a node existing on the unique path from the root to the rightmost leaf in a tree, and the new node is added as the rightmost sibling. The details of this method can be found in the papers (Abe et al., 2002; Zaki, 2002). In addition, the following pruning techniques are applied.", |
| "cite_spans": [ |
| { |
| "start": 297, |
| "end": 323, |
| "text": "(Kudo and Matsumoto, 2004)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 820, |
| "end": 838, |
| "text": "(Abe et al., 2002;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 839, |
| "end": 850, |
| "text": "Zaki, 2002)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "If ( gain(t o , z o ) < gain(t l , z l ) ) (t o , z o ) = (t l , z l ); ##", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Size constraint: We examine subtrees whose size is no greater than a size threshold \u03b6.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "A bound of gain: We use a bound of gain u(t):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "u(t) def = max y\u2208{\u00b11}, 1\u2264z\u2264 max 1\u2264i\u2264m |t\u2286x i | \u221a W r\u27e8y,z\u27e9 (t) + max u\u2208{\u00b11} U r\u27e8u\u27e9 (t),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "U r\u27e8u\u27e9 (t) = | \u221a \u2211 m i=1 w r,i [[y i = u]] \u2212 \u221a W \u00ac r\u27e8\u2212u\u27e9 (t)|.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "For any tree t \u2032 \u2208 X that has t as a subtree (i.e. t \u2286 t \u2032 ), the gain(t \u2032 , z) for any frequency thresholds z' of t \u2032 , is bounded under u(t), since, for y \u2208 {\u00b11},", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "| \u221a W r\u27e8+1,z \u2032 \u27e9 (t \u2032 ) \u2212 \u221a W r\u27e8\u22121,z \u2032 \u27e9 (t \u2032 )| \u2264 max( \u221a W r\u27e8+1,z \u2032 \u27e9 (t), \u221a W r\u27e8\u22121,z \u2032 \u27e9 (t)) \u2264 \u221a W r\u27e8y,z\u27e9 (t), 1 and | \u221a W \u00ac r\u27e8+1\u27e9 (t \u2032 ) \u2212 \u221a W \u00ac r\u27e8\u22121\u27e9 (t \u2032 )| \u2264 U r\u27e8u\u27e9 (t), 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "where z, y and u maximize u(t).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
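Under the definitions above, the bound u(t) can be sketched as follows; `bound_u` and its arguments are hypothetical names, with W and W-not computed directly from the sample weights, labels, and occurrence counts of the candidate subtree t:

```python
import math

def bound_u(freqs, w, y):
    """freqs[i] = occurrence count of candidate subtree t in sample x_i,
    w[i] = boosting weight, y[i] = label in {+1, -1}."""
    m = len(w)
    zmax = max(freqs) if max(freqs) > 0 else 1
    def W(lab, z):
        # weight of samples with label `lab` in which t occurs >= z times
        return sum(w[i] for i in range(m) if y[i] == lab and freqs[i] >= z)
    def Wneg(lab):
        # weight of samples with label `lab` in which t does not occur
        return sum(w[i] for i in range(m) if y[i] == lab and freqs[i] == 0)
    def total(lab):
        return sum(w[i] for i in range(m) if y[i] == lab)
    first = max(math.sqrt(W(lab, z))
                for lab in (+1, -1) for z in range(1, zmax + 1))
    second = max(abs(math.sqrt(total(u)) - math.sqrt(Wneg(-u)))
                 for u in (+1, -1))
    return first + second
```

If u(t) does not exceed the gain of the current N-th optimal rule, every rightmost extension of t can be discarded, since the gain of any supertree of t is bounded by u(t).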
| { |
| "text": "Thus, if u(t) is less than or equal to the gain of the current N -th optimal rule \u03c4 , candidates containing t are safely pruned. Figure 2 is a pseudo code representation of our boosting-based algorithm for classifying trees. First, the algorithm sets the initial weights of samples. Then, the algorithm repeats the rule learning procedure until it obtains R rules. At each boosting round, a rule is selected by the weak-learner.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 129, |
| "end": 137, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "\u2211 i\u2208{i \u2032 |t\u2288x i \u2032 \u2227y i =y} wr,i \u2264 \u2211 i\u2208{i \u2032 |t \u2032 \u2288x i \u2032 \u2227y i =y} wr,i \u2264 \u2211 1\u2264i\u2264m wr,i[[yi = y]] for t \u2286 t \u2032", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "and y \u2208 {\u00b11}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "The weak-learner starts to select a rule from subtrees of size 1 and the new candidates are generated by rightmost extension. After a rule is selected, the weights are updated with the rule.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Rules Efficiently", |
| "sec_num": "3.4" |
| }, |
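The overall loop described above (initialize weights, select one rule per round, reweight the samples) can be sketched as follows, assuming a confidence-rated AdaBoost-style update; `weak_learner`, `boost`, and `classify` are illustrative names, not the paper's code:

```python
import math

def boost(samples, labels, weak_learner, R):
    """Run R boosting rounds; `weak_learner` returns a rule h(x) in
    {-1, +1} together with a confidence value c (both assumed here)."""
    m = len(samples)
    w = [1.0 / m] * m                      # uniform initial weights
    rules = []
    for _ in range(R):
        h, c = weak_learner(samples, labels, w)
        rules.append((h, c))
        # exponential update: misclassified samples gain weight
        w = [w[i] * math.exp(-c * labels[i] * h(samples[i]))
             for i in range(m)]
        Z = sum(w)
        w = [wi / Z for wi in w]           # renormalize to a distribution
    return rules

def classify(rules, x):
    """Final classifier: the sign of the confidence-weighted vote."""
    return 1 if sum(c * h(x) for h, c in rules) >= 0 else -1
```

A toy weak learner, e.g. `lambda s, l, w: ((lambda x: 1 if x > 0 else -1), 0.5)`, is enough to exercise the loop.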
| { |
| "text": "We used the following two data sets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Set", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 Japanese news articles: We used Japanese news articles from the collection of news articles of Mainichi Shimbun 2010 which have at least one paragraph 3 and one of the following five categories: business, entertainment, international, sports, and technology. Table 1 shows the statistics of the Mainichi Shimbun data set. The training data is 80% of the selected news articles and test and development data are 10%. We used the text data represented by bag-of-words as well as text data represented by trees in this experiment.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 261, |
| "end": 269, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Set", |
| "sec_num": "4" |
| }, |
| { |
| "text": "To convert sentences in Japanese news articles to trees, we used CaboCha (Kudo and Matsumoto, 2002) , a Japanese dependency parser. 4 Parameters are decided in terms of F-measure on positive samples of the development data, and we evaluate F-measure obtained with the decided parameters. Fmeasure is calculated as 2\u00d7r\u00d7p p+r , where r and p are recall and precision.", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 99, |
| "text": "(Kudo and Matsumoto, 2002)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 132, |
| "end": 133, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Set", |
| "sec_num": "4" |
| }, |
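The F-measure used above, 2rp/(p+r), is the harmonic mean of precision and recall and can be computed directly from true-positive, false-positive, and false-negative counts; a minimal sketch:

```python
def f_measure(tp, fp, fn):
    """F-measure on the positive class: harmonic mean of precision and recall."""
    p = tp / (tp + fp)    # precision
    r = tp / (tp + fn)    # recall
    return 2 * r * p / (p + r)
```

For example, 8 true positives with 2 false positives and 2 false negatives gives p = r = 0.8 and hence F = 0.8.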
| { |
| "text": "\u2022 English Amazon review data: This is a data set from (Blitzer et al., 2007) that contains product reviews from Amazon domains. The 5 most frequent categories, book, dvd, electronics, music, and video, are used in this experiment. The goal is to classify a product review as either positive or negative. We used the file, all.review, for each domain in the data set for this evaluation. By following the paper (Blitzer et al., 2007) , review texts that have ratings more than three are used as positive reviews, and review texts that have ratings less than three are used as negative reviews. We used only the text data represented by word sequences in this experiment because a parser could not parse all the text data due to either the lack of memory or the parsing speed. Even if we ran the parser for two weeks, parsing on a data set would not finish. Table 2 shows the statistics of the Amazon data set. Each training data is 80% of samples in all.review of each category, and test and development data are 10%. Parameters are decided in terms of F-measure on negative reviews of the development data, and we evaluate F-measure obtained with the decided parameters. The number of positive reviews in the data set is much larger than negative reviews. Therefore, we evaluated the F-measure of the negative reviews.", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 76, |
| "text": "(Blitzer et al., 2007)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 410, |
| "end": 432, |
| "text": "(Blitzer et al., 2007)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 856, |
| "end": 863, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Set", |
| "sec_num": "4" |
| }, |
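The rating-to-label convention described above (ratings above three positive, below three negative, rating three discarded) can be sketched as follows; the review tuples are illustrative:

```python
def label_review(rating):
    """Map a 1-5 star rating to a sentiment label, as in the setup above."""
    if rating > 3:
        return +1        # positive review
    if rating < 3:
        return -1        # negative review
    return None          # rating of exactly three: not used

reviews = [(5, "great book"), (1, "waste of money"), (3, "ok")]
labeled = [(label_review(r), text)
           for r, text in reviews if label_review(r) is not None]
```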
| { |
| "text": "To represent a set of sentences represented by labeled ordered trees, we use a single tree created by connecting the sentences with the root node of the single tree.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Set", |
| "sec_num": "4" |
| }, |
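With the (label, children) pair representation used in the sketches here, connecting sentence trees under a common root is a one-liner; the ROOT label is an assumption:

```python
def combine(sentence_trees):
    """Join per-sentence trees into one tree whose root dominates all
    sentence roots, so learned subtrees can come from any sentence."""
    return ("ROOT", list(sentence_trees))

doc = combine([("S1", []), ("S2", [("w", [])])])
```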
| { |
| "text": "To evaluate our classifier, we compare our learning algorithm with an algorithm that does not learn frequency thresholds. For experiments on Mainichi Shimbun, the following two data representations are used: Bag Of Words (BOW) (i.e. \u03b6 = 1), and trees (Tree). For the representations of texts of Amazon data set, BOW and N-gram are used. The parameters, R and \u03b6, are R = 10, 000 and \u03b6 = {2, 3, 4, 5}. Table 3 and Table 4 show the experimental results on the Mainichi Shimbun and on the Amazon data set. +FQ suggests the algorithms learn frequency thresholds, and -FQ suggests the algorithms do not. A McNemars paired test is employed on the labeling disagreements. If there is a statistical difference (p < 0.01) between a boosting (+FQ) and a boosting (-FQ) with the same feature representation, better results are asterisked ( * ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 400, |
| "end": 419, |
| "text": "Table 3 and Table 4", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "5.1" |
| }, |
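McNemar's paired test on labeling disagreements can be sketched with the common chi-square approximation (with continuity correction); here b and c are the counts of test samples that exactly one of the two classifiers labels correctly, and the critical value 6.635 is the chi-square (1 df) threshold for p < 0.01:

```python
def mcnemar_chi2(b, c):
    """Chi-square statistic with continuity correction for McNemar's test.
    b, c: discordant-pair counts (exactly one classifier correct)."""
    return (abs(b - c) - 1) ** 2 / (b + c)

# e.g. 40 vs 10 discordant pairs: statistic exceeds 6.635, so the
# difference is significant at p < 0.01 under this approximation.
stat = mcnemar_chi2(40, 10)
```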
| { |
| "text": "The experimental results showed that classifiers that consider frequency of subtrees attained better performance. For example, Tree(+FQ) showed better accuracy than Tree(-FQ) on three categories on the Mainichi Shimbun data set. Compared with BOW(+FQ), Tree(+FQ) also showed better performance on four categories.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "On the Amazon data set, N-gram(+FQ) also had better performance than BOW and N-gram(-FQ). N-gram(+FQ) performed better performances than BOW on all five categories, while performing better than N-gram(-FQ) on four categories. These results show that our proposed methods contributed to improved accuracy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "By learning frequency thresholds, classifiers learned by our boosting algorithm can distinguish subtle differences of meanings. The following are some examples observed in rules learned from the book category training data. For example, three types of thresholds for \"great\" were learned. This seems to capture more occurrences of \"great\" indicated positive meaning. For classifying texts as positive, \"I won't read\" with 2 \u2264, which means more than once, was learned. Generally, \"I won't read\" seems to be used in negative reviews. However, reviews in training data include \"I wont' read\" more than once is positive reviews. In a similar way, \"some useful\" and \"some good\" with < 2, which means less than 2 times, were learned for classifying as negative. These two expression can be used in both meanings like \"some good ideas in the book.\" or \"... some good ideas, but for ... \". The learner seems to judge only one time occurrences as a clue for classifying texts as negative.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Examples of Learned Rules", |
| "sec_num": "5.2" |
| }, |
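How such a frequency-thresholded rule fires on a text can be sketched as follows; the token n-gram matching, the threshold semantics, and the confidence value are illustrative assumptions rather than the learned rules themselves:

```python
def rule_vote(text_tokens, pattern, z, confidence):
    """Vote +confidence when `pattern` (a token sequence) occurs at
    least z times in the text, and -confidence otherwise."""
    n = len(pattern)
    freq = sum(1 for i in range(len(text_tokens) - n + 1)
               if text_tokens[i:i + n] == pattern)
    return confidence if freq >= z else -confidence

tokens = "i won't read this again i won't read anything like it".split()
# A rule like <"I won't read", 2 <= freq> votes positive on this text.
vote = rule_vote(tokens, ["i", "won't", "read"], 2, 0.3)
```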
| { |
| "text": "We have proposed a boosting algorithm that learns rules represented by subtrees with their frequency information. Our algorithm learns how the occurrence frequency of each subtree in texts is important for classification. Experiments with the tasks of sentiment classification and topic identification of new articles showed the effectiveness of subtree features with their frequency. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We see it from W r\u27e8y,z \u2032 \u27e9 (t \u2032 ) \u2264 W r\u27e8y,z \u2032 \u27e9 (t) for t \u2286 t \u2032 and y \u2208 {\u00b11}.2 We see it from W \u00ac r\u27e8y\u27e9 (t) =", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "There are articles that do not have body text due to copyright.4 http://code.google.com/p/cabocha/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Optimized substructure discovery for semi-structured data", |
| "authors": [ |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Abe", |
| "suffix": "" |
| }, |
| { |
| "first": "Shinji", |
| "middle": [], |
| "last": "Kawasoe", |
| "suffix": "" |
| }, |
| { |
| "first": "Tatsuya", |
| "middle": [], |
| "last": "Asai", |
| "suffix": "" |
| }, |
| { |
| "first": "Hiroki", |
| "middle": [], |
| "last": "Arimura", |
| "suffix": "" |
| }, |
| { |
| "first": "Setsuo", |
| "middle": [], |
| "last": "Arikawa", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "PKDD'02", |
| "volume": "", |
| "issue": "", |
| "pages": "1--14", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenji Abe, Shinji Kawasoe, Tatsuya Asai, Hiroki Arimura, and Setsuo Arikawa. 2002. Optimized substructure discovery for semi-structured data. In PKDD'02, pages 1-14.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Sentiment Analysis where AI meets Psychology. Asian Federation of Natural Language Processing", |
| "authors": [], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sivaji Bandyopadhyay and Manabu Okumura, editors. 2011. Sentiment Analysis where AI meets Psychol- ogy. Asian Federation of Natural Language Process- ing, Chiang Mai, Thailand, November.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Multilingual subjectivity: are more languages better?", |
| "authors": [ |
| { |
| "first": "Carmen", |
| "middle": [], |
| "last": "Banea", |
| "suffix": "" |
| }, |
| { |
| "first": "Rada", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| }, |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. of COLING '10", |
| "volume": "", |
| "issue": "", |
| "pages": "28--36", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carmen Banea, Rada Mihalcea, and Janyce Wiebe. 2010. Multilingual subjectivity: are more languages better? In Proc. of COLING '10, pages 28-36.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Blitzer", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Dredze", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of ACL'07", |
| "volume": "", |
| "issue": "", |
| "pages": "440--447", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classi- fication. In Proc. of ACL'07, pages 440-447.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Nigel", |
| "middle": [], |
| "last": "Duffy", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of ACL'02", |
| "volume": "", |
| "issue": "", |
| "pages": "263--270", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins and Nigel Duffy. 2002. New rank- ing algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In Proc. of ACL'02, pages 263-270.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Discriminative reranking for natural language parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Terry", |
| "middle": [], |
| "last": "Koo", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Computational Linguistics", |
| "volume": "31", |
| "issue": "1", |
| "pages": "25--70", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computa- tional Linguistics, 31(1):25-70.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A decision-theoretic generalization of on-line learning and an application to boosting", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Freund", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "E" |
| ], |
| "last": "Schapire", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Journal of computer and system sciences", |
| "volume": "55", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Freund and Robert E. Schapire. 1997. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences 55(1).", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Fast logistic regression for text categorization with variable-length n-grams", |
| "authors": [ |
| { |
| "first": "Georgiana", |
| "middle": [], |
| "last": "Ifrim", |
| "suffix": "" |
| }, |
| { |
| "first": "G\u00f6khan", |
| "middle": [], |
| "last": "Bakir", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerhard", |
| "middle": [], |
| "last": "Weikum", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proc. of KDD'08", |
| "volume": "", |
| "issue": "", |
| "pages": "354--362", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Georgiana Ifrim, G\u00f6khan Bakir, and Gerhard Weikum. 2008. Fast logistic regression for text categorization with variable-length n-grams. In Proc. of KDD'08, pages 354-362.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Exploiting syntactic patterns as clues in zeroanaphora resolution", |
| "authors": [ |
| { |
| "first": "Ryu", |
| "middle": [], |
| "last": "Iida", |
| "suffix": "" |
| }, |
| { |
| "first": "Kentaro", |
| "middle": [], |
| "last": "Inui", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuji", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proc. of Meeting of Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryu Iida, Kentaro Inui, and Yuji Matsumoto. 2006. Exploiting syntactic patterns as clues in zero- anaphora resolution. In Proc. of Meeting of Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Kernels for semi-structured data", |
| "authors": [ |
| { |
| "first": "Hisashi", |
| "middle": [], |
| "last": "Kashima", |
| "suffix": "" |
| }, |
| { |
| "first": "Teruo", |
| "middle": [], |
| "last": "Koyanagi", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "ICML'02", |
| "volume": "", |
| "issue": "", |
| "pages": "291--298", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hisashi Kashima and Teruo Koyanagi. 2002. Kernels for semi-structured data. In ICML'02, pages 291- 298.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Japanese dependency analysis using cascaded chunking", |
| "authors": [ |
| { |
| "first": "Taku", |
| "middle": [], |
| "last": "Kudo", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuji", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of CoNLL'02", |
| "volume": "", |
| "issue": "", |
| "pages": "1--7", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taku Kudo and Yuji Matsumoto. 2002. Japanese dependency analysis using cascaded chunking. In Proc. of CoNLL'02, pages 1-7.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A boosting algorithm for classification of semi-structured text", |
| "authors": [ |
| { |
| "first": "Taku", |
| "middle": [], |
| "last": "Kudo", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuji", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of EMNLP'04", |
| "volume": "", |
| "issue": "", |
| "pages": "301--308", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taku Kudo and Yuji Matsumoto. 2004. A boosting algorithm for classification of semi-structured text. In Proc. of EMNLP'04, pages 301-308, July.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Boosting-based parse reranking with subtree features", |
| "authors": [ |
| { |
| "first": "Taku", |
| "middle": [], |
| "last": "Kudo", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Suzuki", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Isozaki", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of ACL'05", |
| "volume": "", |
| "issue": "", |
| "pages": "189--196", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taku Kudo, Jun Suzuki, and Hideki Isozaki. 2005. Boosting-based parse reranking with subtree fea- tures. In Proc. of ACL'05, pages 189-196.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Suffix arrays: a new method for on-line string searches", |
| "authors": [ |
| { |
| "first": "Udi", |
| "middle": [], |
| "last": "Manber", |
| "suffix": "" |
| }, |
| { |
| "first": "Gene", |
| "middle": [], |
| "last": "Myers", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Proceedings of the first annual ACM-SIAM symposium on Discrete algorithms, SODA '90", |
| "volume": "", |
| "issue": "", |
| "pages": "319--327", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Udi Manber and Gene Myers. 1990. Suffix arrays: a new method for on-line string searches. In Proceed- ings of the first annual ACM-SIAM symposium on Discrete algorithms, SODA '90, pages 319-327.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Text categorization with all substring features", |
| "authors": [ |
| { |
| "first": "Daisuke", |
| "middle": [], |
| "last": "Okanohara", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun'ichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "SDM", |
| "volume": "", |
| "issue": "", |
| "pages": "838--846", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daisuke Okanohara and Jun'ichi Tsujii. 2009. Text categorization with all substring features. In SDM, pages 838-846.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "C4.5: Programs for Machine Learning", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "R" |
| ], |
| "last": "Quinlan", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Boostexter: A boosting-based system for text categorization", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [ |
| "E" |
| ], |
| "last": "Schapire", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Machine Learning", |
| "volume": "39", |
| "issue": "", |
| "pages": "135--168", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert E. Schapire and Yoram Singer. 2000. Boostex- ter: A boosting-based system for text categorization. Machine Learning, 39(2/3):135-168.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "The gain of current optimal rule \u03c4 .\u03c4 = gain(t o , z o ); ## Size constraint pruning If (\u03b6 \u2264 l) return (t o , z o ); ## Generate trees that size is l + 1. Foreach ( t \u2208 T l ) ## The bound of gain If ( u(t) < \u03c4 ) continue;## Generate trees of size l + 1 by rightmost ## extension of a tree t of size of l. T l+1 = T l+1 \u222a RME(t, S); end Foreach return weak-learner(T l+1 , S, W r ); end procedureFigure 2: A pseudo code of the training of a boosting algorithm for classifying trees.", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "num": null, |
| "content": "<table/>", |
| "html": null, |
| "text": "is the weight of correctly classified samples that have +1 as their labels, and W r\u27e8\u22121,<,z\u27e9 (t) is the weight of correctly classified samples that have -1 as their labels.W \u00ac r\u27e8y\u27e9 (t) is the sum of the weights of samples that a rule is not applied to (i.e. t \u2288 x i ) and y", |
| "type_str": "table" |
| }, |
| "TABREF1": { |
| "num": null, |
| "content": "<table><tr><td/><td/><td/><td colspan=\"3\">Mainichi Shimbun</td><td/><td/><td/></tr><tr><td>Category</td><td/><td>Training</td><td/><td/><td>Development</td><td/><td/><td>Test</td></tr><tr><td/><td>#P</td><td>#N</td><td>#W</td><td>#P</td><td>#N</td><td>#W</td><td>#P</td><td>#N</td><td>#W</td></tr><tr><td>business</td><td colspan=\"3\">4,782 18,790 67,452</td><td colspan=\"3\">597 2,348 29,023</td><td colspan=\"3\">597 2,348 29,372</td></tr><tr><td>entertainment</td><td colspan=\"3\">938 22,632 67,682</td><td colspan=\"3\">117 2,829 29,330</td><td colspan=\"3\">117 2,829 28,939</td></tr><tr><td>international</td><td colspan=\"3\">4,693 18,879 67,705</td><td colspan=\"3\">586 2,359 28,534</td><td colspan=\"3\">586 2,359 29,315</td></tr><tr><td>sports</td><td colspan=\"9\">12,687 10,884 67,592 1,586 1,360 28,658 1,585 1,360 29,024</td></tr><tr><td>technology</td><td colspan=\"3\">473 23,097 67,516</td><td colspan=\"3\">59 2,887 29,337</td><td colspan=\"3\">59 2,887 28,571</td></tr></table>", |
| "html": null, |
| "text": "Statistics of Mainichi Shimbun data set. #P, #N and #W relate to the number of positive samples, the number of negative samples, and the number of distinct words, respectively.", |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "num": null, |
| "content": "<table><tr><td/><td/><td/><td colspan=\"3\">Amazon review data</td><td/><td/><td/><td/></tr><tr><td>Category</td><td/><td>Training</td><td/><td/><td>Development</td><td/><td/><td>Test</td><td/></tr><tr><td/><td>#N</td><td>#P</td><td>#W</td><td>#N</td><td>#P</td><td>#W</td><td>#N</td><td>#P</td><td>#W</td></tr><tr><td>books</td><td colspan=\"9\">357,319 2,324,575 1,327,312 44,664 290,571 496,453 44,664 290,571 496,412</td></tr><tr><td>dvd</td><td>52,674</td><td>352,213</td><td>446,628</td><td>6,584</td><td colspan=\"2\">44,026 157,495</td><td>6,584</td><td colspan=\"2\">44,026 155,468</td></tr><tr><td>electronics</td><td>12,047</td><td>40,584</td><td>85,543</td><td>1,506</td><td>5,073</td><td>26,945</td><td>1,505</td><td>5,073</td><td>26,914</td></tr><tr><td>music</td><td>35,050</td><td>423,654</td><td>571,399</td><td>4,381</td><td colspan=\"2\">52,956 180,213</td><td>4,381</td><td colspan=\"2\">52,956 179,787</td></tr><tr><td>video</td><td>13,479</td><td>88,189</td><td>161,920</td><td>1,685</td><td>11,023</td><td>61,379</td><td>1,684</td><td>11,023</td><td>61,958</td></tr></table>", |
| "html": null, |
| "text": "Statistics of Amazon data set. #N, #P and #W relate to the number of negative reviews, the number of positive reviews, and the number of distinct words, respectively.", |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "num": null, |
| "content": "<table><tr><td/><td colspan=\"2\">BOW</td><td/><td>Tree</td></tr><tr><td>Category</td><td>+FQ</td><td>-FQ</td><td>+FQ</td><td>-FQ</td></tr><tr><td>business</td><td>88.79</td><td colspan=\"3\">88.87 * 91.45 * 90.89</td></tr><tr><td>entertaiment</td><td colspan=\"2\">95.07 * 94.27</td><td colspan=\"2\">95.11 * 94.64</td></tr><tr><td>international</td><td>85.25</td><td colspan=\"2\">85.99 * 87.91</td><td>88.28 *</td></tr><tr><td>sports</td><td>98.17</td><td colspan=\"3\">98.52 * 98.70 * 98.64</td></tr><tr><td>technology</td><td colspan=\"2\">83.02 * 78.50</td><td>79.21</td><td>80.77 *</td></tr></table>", |
| "html": null, |
| "text": "Experimental Results of the training on the Mainichi Shinbun. Results in bold show the best accuracy, and while an underline means the accuracy of a boosting is better than the booting algorithm with the same feature representation (e.g. Tree(-FQ) for Tree(+FQ)) on each category.", |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "num": null, |
| "content": "<table><tr><td/><td colspan=\"2\">BOW</td><td colspan=\"2\">N-gram</td></tr><tr><td>Category</td><td>+FQ</td><td>-FQ</td><td>+FQ</td><td>-FQ</td></tr><tr><td>books</td><td>74.35</td><td/><td/><td/></tr></table>", |
| "html": null, |
| "text": "Experimental Results of the training on the Amazon data set. The meaning of results in bold and each underline are the same asFigure 3.74.13 87.33 * 87.20 dvd 83.18 * 82.96 93.35 93.66 * electronics 89.39 * 89.06 93.36 93.57 * music 77.85 * 77.57 91.65 * 91.30 video 95.09 * 95.04 97.10", |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "num": null, |
| "content": "<table/>", |
| "html": null, |
| "text": "Janyce Wiebe. 2000. Learning subjective adjectives from corpora. In Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, pages 735-740. AAAI Press. Mohammed Javeed Zaki. 2002. Efficiently mining frequent trees in a forest. In Proc. of KDD'02, pages 71-80.", |
| "type_str": "table" |
| } |
| } |
| } |
| } |