{
"paper_id": "R09-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:00:39.272423Z"
},
"title": "Fast Boosting-based Part-of-Speech Tagging and Text Chunking with Efficient Rule Representation for Sequential Labeling",
"authors": [
{
"first": "Tomoya",
"middle": [],
"last": "Iwakura",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fujitsu Laboratories Ltd",
"location": {
"addrLine": "1-1, Kamikodanaka 4-chome, Nakahara-ku",
"postCode": "211-8588",
"settlement": "Kawasaki",
"country": "Japan"
}
},
"email": "iwakura.tomoya@jp.fujitsu.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper proposes two techniques for fast sequential labeling such as part-of-speech (POS) tagging and text chunking. The first technique is a boosting-based algorithm that learns rules represented by combination of features. To avoid time-consuming evaluation of combination, we divide features into not used ones and used ones for learning combination. The other is a rule representation. Usual POS taggers and text chunkers decide the tag of each word by using the features generated from the word and its surrounding words. Thus similar rules, for example, that consist of the same set of words but only differ in locations from current words, are generated. We use a rule representation that enables us to merge such rules. We evaluate our methods with POS tagging and text chunking. The experimental results show that our methods show faster processing speed than taggers and chunkers without our methods while maintaining accuracy.",
"pdf_parse": {
"paper_id": "R09-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper proposes two techniques for fast sequential labeling such as part-of-speech (POS) tagging and text chunking. The first technique is a boosting-based algorithm that learns rules represented by combination of features. To avoid time-consuming evaluation of combination, we divide features into not used ones and used ones for learning combination. The other is a rule representation. Usual POS taggers and text chunkers decide the tag of each word by using the features generated from the word and its surrounding words. Thus similar rules, for example, that consist of the same set of words but only differ in locations from current words, are generated. We use a rule representation that enables us to merge such rules. We evaluate our methods with POS tagging and text chunking. The experimental results show that our methods show faster processing speed than taggers and chunkers without our methods while maintaining accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Several machine learning algorithms such as Support Vector Machines (SVMs) and boosting-based learning algorithms have been applied to Natural Language Processing (NLP) problems successfully. The cases of boosting include text categorization [11] , POS tagging [5] and text chunking [7, 5] , and so on. Furthermore, parsers based on boosting-based learners have shown fast processing speed [7, 5] . However, to process large data such as WEB data and e-mails, processing speed of base technologies such as POS tagging and text chunking will be important.",
"cite_spans": [
{
"start": 242,
"end": 246,
"text": "[11]",
"ref_id": "BIBREF10"
},
{
"start": 261,
"end": 264,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 283,
"end": 286,
"text": "[7,",
"ref_id": "BIBREF6"
},
{
"start": 287,
"end": 289,
"text": "5]",
"ref_id": "BIBREF4"
},
{
"start": 390,
"end": 393,
"text": "[7,",
"ref_id": "BIBREF6"
},
{
"start": 394,
"end": 396,
"text": "5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper proposes two techniques for improving processing speed of POS tagging and text chunking. The first technique is a boosting-based algorithm that learns rules. Instead of specifying combination of features manually, we specify features that are not used for the combination of features as atomic. Our boosting algorithm learns rules that consist of features or a feature from non-atomic features, and rules consisting of a feature from atomic features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The other is a rule representation for sequential labeling such as POS tagging and text chunking. Usual POS taggers and text chunkers decide the tag of each word by using features generated from the current word and its surrounding words. Thus each word and its attributes, such as character-types, are evaluated several times in different relative locations from current word. We propose a representation that enables us to merge similar rules that consist of the same set of words and attributes that only differ in positions from current word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The experimental results with English POS tagging and text chunking show the taggers and chunkers based on our methods show faster processing speed than without our methods while maintaining competitive accuracy. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "}, where each example x i \u2208 X consists of features in F, which we call a feature-set, and y i \u2208 Y is a class label. The goal is to induce following mapping from S:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "F : X \u2192 Y. Let |x i | (0 < |x i | \u2264 M )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "be the number of features included in a feature-set x i , which we call the size of x i , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "x i,j \u2208 F (1 \u2264 j \u2264 |x i | ) be a feature included in x i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We call a feature-set of size k as a k-feature-set. We call x i is a subset of x j , if a feature-set x j contains all the features in a feature-set x i . We denote subsets of feature-sets as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "x i \u2286 x j . Then we define weak hypothesis based on the idea of the real-valued predictions and abstaining [11] . Let f be a feature-set, called a rule, c be a real number, called a confidence value, and x be an input feature-set, then a weakhypothesis for feature-sets is defined as",
"cite_spans": [
{
"start": 107,
"end": 111,
"text": "[11]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "h f ,c (x) = c f \u2286 x 0 otherwise .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
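The abstaining weak hypothesis above can be sketched as follows; this is an illustrative reading of the definition, not the paper's implementation, and the names (`weak_hypothesis`, `rule`, `confidence`) are assumptions:

```python
# Abstaining weak hypothesis h_{f,c}(x): it outputs the confidence value c
# when the rule feature-set f is contained in the input feature-set x, and
# abstains (outputs 0) otherwise.

def weak_hypothesis(rule, confidence, x):
    """rule and x are sets of features; fire only if rule is a subset of x."""
    return confidence if rule <= x else 0.0

rule = frozenset({"I:0", "am:1"})
print(weak_hypothesis(rule, 0.8, {"I:0", "am:1", "happy:2"}))  # rule fires
print(weak_hypothesis(rule, 0.8, {"I:0", "happy:1"}))          # abstains
```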
{
"text": "We use a boosting-based algorithm that has shown fast training speed by treating a weak learner that learns several rules at each iteration [5] . The learner learns a final hypothesis F consisting of R types of rules defined as",
"cite_spans": [
{
"start": 140,
"end": 143,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
{
"text": "F (x) = sign( P R r=1 h fr ,cr (x)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
{
"text": "We use a learning algorithm that generates several rules from a given training samples S = {(x i , y i )} m i=1 and weights over samples {w r,1 , ..., w r,m } as weak learner. w r,i is the weight of sample number i after selecting r \u2212 1 types of rules, where 0<w r,i , 1 \u2264 i \u2264 m and 1 \u2264 r \u2264 R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
{
"text": "Given such input, the weak learner selects \u03bd types of rules with gain:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
{
"text": "gain(f ) def = | p W r,+1 (f ) \u2212 p W r,\u22121 (f )|,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
{
"text": "where f is a feature-set, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
{
"text": "Wr,y(f ) is Wr,y(f ) = P m i=1 w r,i [[f \u2286 x i \u2227 y i = y]],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
{
"text": "where [[\u03c0] ] is 1 if a proposition \u03c0 holds and 0 otherwise. The weak learner selects a feature-set having the highest gain as the r-th rule, and the weak learner selects \u03bd types of feature-sets having gain in top \u03bd as {f r , ..., f r+\u03bd\u22121 } at each iteration.",
"cite_spans": [
{
"start": 6,
"end": 10,
"text": "[[\u03c0]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
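As a hedged sketch, the gain criterion and top-\u03bd selection above can be written as follows; the function names and the list encoding of samples are assumptions, not the paper's code:

```python
import math

# gain(f) = |sqrt(W_{r,+1}(f)) - sqrt(W_{r,-1}(f))|, where W_{r,y}(f) sums
# the weights of samples of class y whose feature-set contains f.

def gain(candidate, samples, weights):
    w_pos = sum(w for (x, y), w in zip(samples, weights) if candidate <= x and y == +1)
    w_neg = sum(w for (x, y), w in zip(samples, weights) if candidate <= x and y == -1)
    return abs(math.sqrt(w_pos) - math.sqrt(w_neg))

def select_n_best(candidates, n, samples, weights):
    # selectNBest in the pseudocode: rank candidate feature-sets by gain
    return sorted(candidates, key=lambda f: gain(f, samples, weights), reverse=True)[:n]

samples = [({"a", "b"}, +1), ({"a"}, +1), ({"b"}, -1)]
weights = [1.0, 1.0, 1.0]
best = select_n_best([frozenset({"a"}), frozenset({"b"})], 1, samples, weights)
```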
{
"text": "Then the boosting-based learner calculates the confidence value of each rule in the selected \u03bd rules and updates the weight of each sample. The confidence value c r for the first rule f r in the selected \u03bd rules is defined as ## F k : A set of k-feature-sets ## Ro : \u03bd optimal rules (feature-sets) ## R k,\u03c9 : \u03c9 k-feature-sets for generating candidates ## selectNBest(R, n, S, Wr): Select n best rules in R ## with gain on {w i,r } m i=1 and training samples S ## FN , FA : non-atomic, atomic features procedure weak-learner(F k ,S, Wr) ## \u03bd best feature-sets as rules",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
{
"text": "Ro = selectNBest( Ro \u222a F k , \u03bd, S, Wr); if (\u03b6 \u2264 k) return Ro; ## Size constraint ## \u03c9 best feature-sets in F k for generating candidates R k,\u03c9 = selectNBest(F k , \u03c9, S, Wr); \u03c4 = min f \u2208Ro gain(f ); ## The gain of \u03bd-th optimal rule Foreach ( f k \u2208 R k,\u03c9 ) ## Pruning candidates with upper bound of gain if ( u(f k ) < \u03c4 ) continue; Foreach (f \u2208 F N ) ## Generate candidates F k+1 = (F k+1 \u222a gen(f k , f ));",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
{
"text": "end Foreach end Foreach return weak-learner(F k+1 , S, Wr); ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
{
"text": "cr = 1 2 log( W r,+1 (fr )+\u03b5 W r,\u22121 (fr )+\u03b5 ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
{
"text": "where \u03b5 is a value to avoid to happen that W r,+1 (f ) or W r,\u22121 (f ) is very small or even zero [10] . We set \u03b5 to 1.",
"cite_spans": [
{
"start": 97,
"end": 101,
"text": "[10]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
{
"text": "After the calculation of c r for f r , the learner updates the weight of each sample with",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w r+1,i = w r,i exp(\u2212y i h fr ,cr (x i )).",
"eq_num": "(1)"
}
],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
{
"text": "Then the learner adds (f r , c r ) to F as the r-th rule and its confidence value. When we calculate the confidence value c r+1 for f r+1 , we use {w r+1,1 , ..., w r+1,m } as the weights of samples. After processing all the selected rules, the learner starts the next iteration. The learner continues training until obtaining R rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Boosting-based Rule Learning",
"sec_num": "2.2"
},
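The confidence-value formula and the weight update of Eq. (1) can be sketched together; \u03b5 = 1 follows the paper, while the other names are illustrative assumptions:

```python
import math

# c_r = 1/2 * log((W_{r,+1}(f_r) + eps) / (W_{r,-1}(f_r) + eps))
# w_{r+1,i} = w_{r,i} * exp(-y_i * h_{f_r,c_r}(x_i))

def confidence(rule, samples, weights, eps=1.0):
    w_pos = sum(w for (x, y), w in zip(samples, weights) if rule <= x and y == +1)
    w_neg = sum(w for (x, y), w in zip(samples, weights) if rule <= x and y == -1)
    return 0.5 * math.log((w_pos + eps) / (w_neg + eps))

def update_weights(rule, c, samples, weights):
    # Only samples covered by the rule change weight; h abstains elsewhere.
    return [w * math.exp(-y * c) if rule <= x else w
            for (x, y), w in zip(samples, weights)]

samples = [({"a", "b"}, +1), ({"a"}, +1), ({"b"}, -1)]
weights = [1.0, 1.0, 1.0]
rule = frozenset({"a"})
c = confidence(rule, samples, weights)        # W_{+1}=2, W_{-1}=0, eps=1
weights = update_weights(rule, c, samples, weights)
```

Covered positive samples are down-weighted after the rule is learned; the uncovered sample keeps its weight.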
{
"text": "We extend a weak learner that learns several rules from a small portion of candidate rules called a bucket used in [5] . Figure 1 describes an overview of the weak learner.",
"cite_spans": [
{
"start": 115,
"end": 118,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 121,
"end": 129,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Learning Rules",
"sec_num": "2.3"
},
{
"text": "At each iteration, one of the |B| types of buckets is given as an initial 1-feature-sets F 1 to the weak learner. We use W-dist that is a method to distributes features to |B|-buckets. To distribute features to buckets, W-dist calculates the weight of each feature that is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Rules",
"sec_num": "2.3"
},
{
"text": "Wr(f ) = P m i=1 w r,i [[{f } \u2286 x i ]] (f \u2208 F).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Rules",
"sec_num": "2.3"
},
{
"text": "Then W-dist sorts features based on the weight of each feature, and insert each feature to one of the buckets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Rules",
"sec_num": "2.3"
},
{
"text": "The weak learner finds \u03bd best feature-sets as rules from feature-sets that include one of the features in F 1 . The weak learner generates candidate k-feature-sets (1 < k) from \u03c9 best (k-1)-feature-sets in F k\u22121 with gain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Rules",
"sec_num": "2.3"
},
{
"text": "We define two types of features, FA and FN (i.e F = FA \u222a FN ). FA and FN are a set of atomic features and a set of non-atomic features. When we generate candidate rules that consist of more than a feature, we only use nonatomic features in FN .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Rules",
"sec_num": "2.3"
},
{
"text": "For example, if we use features FA = {A, B, C} and FN = {a, b, c}, we examine followings as candidates; {A},{B},{C},{a},{b}, {c}, {a, b}, {b, c} and {a, b, c}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Rules",
"sec_num": "2.3"
},
{
"text": "The gen is a function to generate combination of features. We denote f = f + f as the generation of k + 1feature-set f that consists of a feature f and a k-feature-set f . Let ID(f ) be the integer corresponding to f , called id, and \u03c6 be 0-feature-set. Then the gen is defined as follows. The gen excludes the generation of candidates that include an atomic feature. We assign smaller integer to more infrequent features as id. If there are features having the same frequency, we assign id to each feature with lexicographic order of features as in [4] .",
"cite_spans": [
{
"start": 550,
"end": 553,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Rules",
"sec_num": "2.3"
},
{
"text": "gen(f , f ) = 8 > < > : \u03c6 if ( f \u2286 FA ) f + f if ID(f ) > max f \u2208f ID(f ) \u03c6 otherwise . ## S = {(x i , y i )} m i=1 : x i \u2286X , y i \u2208 {+1} ## Wr = {w r,i } m i=1 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Rules",
"sec_num": "2.3"
},
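A minimal sketch of the candidate generator gen: it extends a k-feature-set only with a non-atomic feature whose id exceeds every id already in the set, so each combination is enumerated exactly once. The integer ids, the `frozenset` encoding, and all names are illustrative assumptions:

```python
# gen(f_k, f): return phi (empty set) for atomic features or out-of-order
# ids; otherwise return the (k+1)-feature-set f_k + f.

def gen(fk, f, ids, atomic):
    if f in atomic:                          # never combine atomic features
        return frozenset()                   # phi: no candidate generated
    if fk and ids[f] <= max(ids[g] for g in fk):
        return frozenset()                   # already enumerated in another order
    return fk | {f}

ids = {"a": 1, "b": 2, "c": 3}               # smaller id = more infrequent
atomic = {"A", "B", "C"}
nonatomic = ["a", "b", "c"]

# grow 2-feature-sets from the 1-feature-set {a}
cands = [gen(frozenset({"a"}), f, ids, atomic) for f in nonatomic]
cands = [c for c in cands if c]              # drop phi
```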
{
"text": "We also use the following pruning techniques. \u2022 Size constraint (\u03b6): We examine candidates whose size is no greater than a threshold \u03b6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Rules",
"sec_num": "2.3"
},
{
"text": "\u2022 Upper bound of gain: The upper bound is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Rules",
"sec_num": "2.3"
},
{
"text": "u(f ) def = max( p W r,+1 (f ), p W r,\u22121 (f )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Rules",
"sec_num": "2.3"
},
{
"text": "For any feature-set f \u2286F, which contains f (i.e. f \u2286 f ), the gain(f ) is bounded under u(f ), since 0 \u2264 Wr,y(f ) \u2264 Wr,y(f ) for y \u2208 {\u00b11}. Thus if u(f ) is less than \u03c4 , the gain of the current optimal rule, candidates that contain f are safely pruned. Figure 2 describes an overview of our algorithm, which we call AdaBoost for a weak learner learning Several rules from Distributed Features consist of Atomic and Nonatomic (AdaBoost.SDFAN, for short). 1 ",
"cite_spans": [
{
"start": 454,
"end": 455,
"text": "1",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 253,
"end": 261,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Learning Rules",
"sec_num": "2.3"
},
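The upper-bound pruning can be sketched as follows: since u(f) bounds the gain of every superset of f, a candidate with u(f) < \u03c4 is discarded before any superset is generated. The names and sample encoding are illustrative assumptions:

```python
import math

# u(f) = max(sqrt(W_{r,+1}(f)), sqrt(W_{r,-1}(f))) bounds gain(f') for any
# f' containing f, because W_{r,y} can only shrink as f grows.

def class_weights(f, samples, weights):
    w_pos = sum(w for (x, y), w in zip(samples, weights) if f <= x and y == +1)
    w_neg = sum(w for (x, y), w in zip(samples, weights) if f <= x and y == -1)
    return w_pos, w_neg

def upper_bound(f, samples, weights):
    w_pos, w_neg = class_weights(f, samples, weights)
    return max(math.sqrt(w_pos), math.sqrt(w_neg))

samples = [({"a", "b"}, +1), ({"a"}, -1)]
weights = [0.5, 0.5]
tau = 1.0                                    # gain of the current nu-th best rule
# u({"a"}) = sqrt(0.5) < tau, so every superset of {"a"} is pruned
# without evaluating its gain.
prune = upper_bound(frozenset({"a"}), samples, weights) < tau
```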
{
"text": "When identifying the POS tags of words and chunks of words in usual parsers, we firstly generate features from current word and its surrounding words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Problem of Conventional Methods",
"sec_num": "3.1"
},
{
"text": "Let \"I am happy .\" be a sequence of words. If we identify a tag of \"am\" with 3-word window, we use \"I\", \"am\" and \"happy\" as features. To distinguish words that appear different locations, we usually express words with relative locations from current word like \"I:-1\", \"am:0\" and \"happy:1\", where the -1, 0 and 1 after \":\" are locationmarkers for relative locations. When \"happy\" is a current word, we have to express \"am\" as \"am:-1\". Thus similar rules that differ in relative locations are generated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Problem of Conventional Methods",
"sec_num": "3.1"
},
{
"text": "We propose a rule representation, called Compressed Sequential Labeling Rule Representation (CSLR-rep, for 1 To reflect imbalance class distribution, we use the default rule defined as 1 2 log( and {I:0, am:1}, P RP, c 2 be WHs generated by AdaBoost.SDFAN, and JJ, V BP and P RP be class tags.",
"cite_spans": [
{
"start": 107,
"end": 108,
"text": "1",
"ref_id": "BIBREF0"
},
{
"start": 185,
"end": 186,
"text": "1",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient Rule Representation",
"sec_num": "3.2"
},
{
"text": "W +1 W \u22121 ), where Wy = P m i=1 [[y i = y]] for y \u2208 {\u00b11}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient Rule Representation",
"sec_num": "3.2"
},
{
"text": "These WHs are converted to the following CSLR-rep-based rule;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient Rule Representation",
"sec_num": "3.2"
},
{
"text": "{I:0, am:1}, {(2, JJ, c 0 ), (1, V BP, c 1 ), (0, P RP, c 2 )} ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient Rule Representation",
"sec_num": "3.2"
},
{
"text": "When the converted WH in the example is applied to a word sequence \"I am happy .\", we can assign scores to all the three words by just checking {I:0, am:1}. The scores for \"JJ\", \"VBP\" and \"PRP\" are assigned to \"happy\", \"am\" and \"I\", respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient Rule Representation",
"sec_num": "3.2"
},
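The merging that CSLR-rep performs can be sketched as follows: rules sharing the same feature-stems but differing only in relative locations collapse onto one rule keyed by locations re-based on the leftmost feature (the base-position bp), with a (scoring-position, class, confidence) entry per original rule. The dict/tuple encoding is an illustrative assumption:

```python
# Merge location-shifted rules into one CSLR-rep rule.
# Each input rule: (features, tag, confidence), features = {(stem, loc), ...}.

def convert(rules):
    merged = {}
    for features, tag, conf in rules:
        bp = min(loc for _, loc in features)          # base-position
        key = frozenset((stem, loc - bp) for stem, loc in features)
        merged.setdefault(key, []).append((-bp, tag, conf))  # scoring-position
    return merged

rules = [
    ({("I", -2), ("am", -1)}, "JJ", 0.5),
    ({("I", -1), ("am", 0)}, "VBP", 0.7),
    ({("I", 0), ("am", 1)}, "PRP", 0.9),
]
merged = convert(rules)
# all three rules collapse onto the single key {I:0, am:1}
```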
{
"text": "When we use the three original WHs in the example, we have to check three rules to assign scores to the words. Figure 3 shows an overview for the rule conversion. We assume each feature is divided into a location-marker and a feature-stem. A location-marker is the relative location from a current word. A feature-stem is a word or one of its attributes such as character-types without a locationmarker.",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 119,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Efficient Rule Representation",
"sec_num": "3.2"
},
{
"text": "We use the relative location of a feature appeared in leftmost word in each rule as base-position (bp, for short). Then we convert each feature to a new feature that consists of its feature-stem and new location-marker. The new location-marker means a relative location from the bp. We add the value of (bp \u00d7 -1) as the scoring-position of the current score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient Rule Representation",
"sec_num": "3.2"
},
{
"text": "We describe an overview of the application of rules represented by CSLR-rep. We consider two types of features, static-features and dynamic-features, in this application. Static-features are generated from input word sequences. Dynamic-features are dynamically generated from the tag of each word assigned with the highest score. We define W as a word window size that means using a current word and its surrounding words appearing W \u22121 2 left and W \u22121 2 right of the current word. Figure 4 shows an overview of the application. Let {wd 1 , .., wd N } be an input that consists of N (1 \u2264 N ) words. Each word wd i (1 \u2264 i \u2264 N ) has |wd i | types of attributes. We denote j-th attribute of wd i as wd i,j . RC is a set of rules represented by CSLR-rep and RC[rc] is the set of scoring-position, class, score of rc.",
"cite_spans": [],
"ref_spans": [
{
"start": 482,
"end": 490,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Rule Application",
"sec_num": "3.3"
},
{
"text": "The application has two stages for static-features and dynamic-features. Our algorithm firstly assigns scores with rules consisting of only Static-features to each word in the direction of beginning of sentence (BOS) to end of sentence (EOS) direction. Rs[i] keeps the status of rule applications for i-th word. If the algorithm finds a subset of rules while applying rules from i-th word, the algorithm adds the subset of rules to Rs[i]. 2 We define subsets of rules as follows: Definition 1 Subsets of rules If there exists rule in rule, scores \u2208 RC that satisfies rc \u2286 rule \u2227 rc = rule, we call rc is a subset of rules of RC and denote it as rc \u2282 RC",
"cite_spans": [
{
"start": 439,
"end": 440,
"text": "2",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Application",
"sec_num": "3.3"
},
{
"text": "Then we apply rules that include dynamic-features. All the subsets of rules are kept in Rs after examining all the Static-features, we can assign scores to words by just checking dynamic-feature of each word with Rs. When checking rules that include the dynamic-feature of i-th word we check subsets of rules of words in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Application",
"sec_num": "3.3"
},
{
"text": "(i \u2212 W \u22121 2 \u2212 \u2206 ) to (i + max( W \u22121 2 , \u2206) -1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Application",
"sec_num": "3.3"
},
{
"text": ". We use the tags of words with in \u2206 in the direction of EOS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Application",
"sec_num": "3.3"
},
{
"text": "We describe an example. Let RC ={ {I:0, am:1}, {I:0, VBP:1}, {I:0, VBP:1, JJ:2} } be a set of rules. When applying the rules to \"I am happy .\" with (W, \u2206) = (3, 2), we check \"I:0\" first. \"I:0\" is inserted to Rs[1] because of {I:0} \u2282 RC. Then we check \"am:1\" with \"{I:0}\" in Rs [1] , and {I:0, am:1} is found. Finally we check \"happy:2\" with Rs [1] . We check the other words like this. After checking all the words from BOS to EOS direction, we start to check rules that include dynamic-features from EOS to BOS direction. If the dynamic-features of \"am\" and \"happy\" are VBP and JJ, we check VBP and JJ with Rs. For example, VBP is treated as \"VBP:1\" from the position of \"I\" and \"VBP:0\" from the position of \"am\". When we check \"VBP:1\" with \"{I:0}\" in Rs [1] , {I:0, VBP:1} is found and inserted to Rs [1] . Then we check \"JJ:2\" with \"I:0\" and {I:0, VBP:1} in Rs [1] . Then we check these dynamicfeatures with Rs [2] .",
"cite_spans": [
{
"start": 277,
"end": 280,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 344,
"end": 347,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 756,
"end": 759,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 803,
"end": 806,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 864,
"end": 867,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 914,
"end": 917,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Application",
"sec_num": "3.3"
},
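The static-feature stage of this walkthrough can be sketched as follows; the scan records in Rs[i] each partial match (subset of rules) so longer rules can later be extended by dynamic-features. The encoding, the simplified matching, and all names are assumptions, and the dynamic-feature stage is omitted:

```python
# Left-to-right static stage over "I am happy ." with the example rule set.
# A feature "stem:k" anchored at word i matches words[i - 1 + k] (1-based i).

RC = [("I:0", "am:1"), ("I:0", "VBP:1"), ("I:0", "VBP:1", "JJ:2")]

def apply_static(words):
    rs = {i: [] for i in range(1, len(words) + 1)}   # Rs, 1-based as in the paper
    matched = []
    for i in range(1, len(words) + 1):
        for rule in RC:
            need = [(stem, int(off)) for stem, off in (f.split(":") for f in rule)]
            got = [stem for stem, off in need
                   if 0 < i + off <= len(words) and words[i + off - 1] == stem]
            if len(got) == len(need):
                matched.append((i, rule))            # full rule fires here
            elif got:
                rs[i].append(tuple(got))             # partial match kept in Rs[i]
    return rs, matched

rs, matched = apply_static(["I", "am", "happy", "."])
```

Only {I:0, am:1} fires on static features; the rules containing VBP and JJ leave the prefix ("I",) in Rs[1], ready for the dynamic-feature pass.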
{
"text": "Unfortunately, the CSLR-rep has some drawbacks. One of the drawbacks is the increase of dynamic-features. When we convert rules that consist of more than a feature to CSLR-rep, the number of types of dynamic-features increases. Since original rule representation only handles dynamic-features within \u2206, the total number of types of dynamic-features is up to \"\u2206 \u00d7 CL\", where CL is the number of classes in each task. However, the total number of dynamic-features in CSLR-rep is up to \" ( W \u22121 2 + \u2206 + max( W \u22121 2 , \u2206) -1) \u00d7 CL \" because we express each feature with the relative location from the base-position of each rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Application",
"sec_num": "3.3"
},
{
"text": "We used the Penn Wall Street Journal treebank [8] . We split the treebank into training (sections 0-18), development (sections 19-21) and test (sections 22-24) as in [5] . We used the following features: \u2022 words, words that are turned into all capitalized, in a Wword window size, tags assigned to \u2206 words on the right.",
"cite_spans": [
{
"start": 46,
"end": 49,
"text": "[8]",
"ref_id": "BIBREF7"
},
{
"start": 166,
"end": 169,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "POS tagging and Text Chunking 4.1 English POS Tagging",
"sec_num": "4"
},
{
"text": "\u2022 whether the current word has a hyphen, a number, a capital letter, the current word is all capital, all small \u2022 prefixes and suffixes of current word (up to 4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tagging and Text Chunking 4.1 English POS Tagging",
"sec_num": "4"
},
{
"text": "\u2022 candidate-tags of words in a W -word window We collect candidate POS tags of each word, called candidate feature, from the automatically tagged corpus provided for the shared task of English Named Entity recognition in CoNLL 2003 as in [5] . 3 4 We express these candidates with one of the following ranges decided by their frequency f q: 10 \u2264 f q < 100, 100 \u2264 f q < 1000 and 1000 \u2264 f q.",
"cite_spans": [
{
"start": 238,
"end": 241,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 244,
"end": 247,
"text": "3 4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "POS tagging and Text Chunking 4.1 English POS Tagging",
"sec_num": "4"
},
{
"text": "If 'work' is annotated as NN 2000 times, we express it like \"1000\u2264NN\". If 'work' is current word, we add 1000\u2264NN as a candidate POS tag feature of the current word. If 'work' appears the next of the current word, we add 1000\u2264NN as a candidate POS tag of the next word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tagging and Text Chunking 4.1 English POS Tagging",
"sec_num": "4"
},
{
"text": "We used the data prepared for CoNLL-2000 shared tasks. 5 This task aims to identify 10 types of chunks, such as, NP, VP and PP, and so on. The data consists of subsets of Penn Wall Street Journal treebank: training (sections 15-18) and test (section 20). We prepared the development set from section 21 of the treebank as in [5] . 6 Each base phrase consists of one word or more. To identify word chunks, we use IOE2 representation. The chunks are represented by the following tags: E-X is used for end word of a chunk of class X. I-X is used for non-end word in an X chunk. O is used for word outside of any chunk. We used the following features: \u2022 words and POS tags in a W -word window.",
"cite_spans": [
{
"start": 55,
"end": 56,
"text": "5",
"ref_id": "BIBREF4"
},
{
"start": 325,
"end": 328,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 331,
"end": 332,
"text": "6",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Chunking",
"sec_num": "4.2"
},
{
"text": "\u2022 tags assigned to \u2206 words on the right.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Chunking",
"sec_num": "4.2"
},
{
"text": "\u2022 candidate-tags of words in a W -word window. We collected the followings as candidate-tags for chunking from the same corpus used in POS tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Chunking",
"sec_num": "4.2"
},
{
"text": "\u2022 Candidate-tags expressed with frequency information as in POS tagging \u2022 The ranking of each candidate decided by frequencies in the automatically tagged data \u2022 Candidate tags of each word If we collect \"work\" annotated as I-NP 2000 times and as E-VP 100 times, we generate the following candidate-tags for \"work\"; 1000\u2264I-NP, 100\u2264E-VP<1000, rank:I-NP=1 rank:E-NP=2, candidate=I-NP and candidate=E-VP. 7 ",
"cite_spans": [
{
"start": 402,
"end": 403,
"text": "7",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Chunking",
"sec_num": "4.2"
},
{
"text": "We tested R=200,000, |B|=1,000, \u03bd = 10, \u03c9=10, \u03b6={1,2,3} and (W, \u2206)={(3,1), (5,2), (7,3)}. Table 1 shows that the number of training samples, classes, features.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We examine two types of training, \"-Atomic \" and \" +Atomic \", in this experiment. \"-Atomic \" indicates training with all the features as non-atomic. \" +Atomic \" indicates training by using atomic features. We specify prefixes, suffixes and candidate-tags as atomic for POS tagging, and candidate-tags as atomic for text chunking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "To extend AdaBoost.SDFAN to handle multi-class problems, we used the one-vs-the-rest method. To identify proper tag sequences, we use Viterbi search. 8 Table 2 shows accuracy obtained with each rules on POS tagging and text chunking. We calculate label accuracy for 7 We converted the chunk representation in the corpus to IOE2 and we collected chunk tags of each word appearing more than 9 times. 8 We map the confidence value of each classifier into the range of 0 to 1 with sigmoid function defined as s(X) = 1/(1+exp(\u2212\u03b2X)), where X = F (x) is a output of a classifier. We used \u03b2=5 in this experiment. We select a tag sequence which maximizes the sum of those log values by Viterbi search. POS tagging Guided learning [12] 97.33 Boosting [5] 97.32 CRF [13] 97.40 This paper 97.34",
"cite_spans": [
{
"start": 150,
"end": 151,
"text": "8",
"ref_id": "BIBREF7"
},
{
"start": 266,
"end": 267,
"text": "7",
"ref_id": "BIBREF6"
},
{
"start": 398,
"end": 399,
"text": "8",
"ref_id": "BIBREF7"
},
{
"start": 721,
"end": 725,
"text": "[12]",
"ref_id": "BIBREF11"
},
{
"start": 741,
"end": 744,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 755,
"end": 759,
"text": "[13]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 152,
"end": 159,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Text Chunking LaSo [2] 94.4 Boosting [5] 94.30 CRF [13] 95.15 This paper 94.34",
"cite_spans": [
{
"start": 19,
"end": 22,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 37,
"end": 40,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 51,
"end": 55,
"text": "[13]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tagging and Chunking Accuracy",
"sec_num": "5.1"
},
{
"text": "As for fast classification methods, techniques for converting or pruning the models or rules generated by machine learning algorithms have been proposed. Model conversion techniques for SVMs with polynomial kernels that convert a kernel-based classifier into a simple linear classifier are proposed in [3, 6] . For AdaBoost, a pruning method for hypotheses is proposed in [9] .",
"cite_spans": [
{
"start": 289,
"end": 292,
"text": "[3,",
"ref_id": "BIBREF2"
},
{
"start": 293,
"end": 295,
"text": "6]",
"ref_id": "BIBREF5"
},
{
"start": 359,
"end": 362,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tagging and Chunking Accuracy",
"sec_num": "5.1"
},
{
"text": "Our method uses a rule conversion technique for sequential labeling problems. Although CSLR-rep can only be used in tasks that repeatedly use each word as different features, such as POS tagging and text chunking, we obtain faster processing speed without loss in accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagging and Chunking Accuracy",
"sec_num": "5.1"
},
{
"text": "We have proposed techniques for fast boosting-based POS tagging and text chunking. To reduce time-consuming rule evaluation, our method controls the generation of combinations of features by specifying the features that are not used for combination. We have also proposed a rule representation that enables us to merge similar rules. Experimental results have shown that our techniques improve classification speed while maintaining accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "We use a TRIE structure called a double array for representing rules [1]. To keep the statuses of rule applications, we store the last position in the TRIE that each subset of rules reached.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.cnts.ua.ac.be/conll2003/ner/ 4 As candidates, we collected the POS tags annotated to each word more than 9 times in the corpus. 5 http://lcg-www.uia.ac.be/conll2000/chunking/ 6 We used http://ilk.uvt.nl/\u02dcsabine/chunklink/chunklink 2-2-2000 for conll.pl for creating development data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used a machine with a 3.6 GHz dual-core Intel Xeon and 10 GB of memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We obtain almost the same accuracy even if we use part of the features as atomic. Table 3 shows tagging and chunking speed. We measure the number of words processed per second. 9 We obtain faster processing speed by using CSLR-rep-based rules trained with \u03b6 = {1, 2} and -Atomic. These results show that CSLR-rep contributes to improved processing time. When we use rules trained with \u03b6 = 1, we obtain more improvement than with rules trained with \u03b6 = 2.",
"cite_spans": [
{
"start": 176,
"end": 177,
"text": "9",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 78,
"end": 85,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "However, the performance obtained with CSLR-rep-based rules trained with (\u03b6 = 3, \u2212Atomic) is slower than with the original rules. We suspect this is due to the following two reasons. Our CSLR-rep reduces the number of rule evaluations by up to 1/W . Thus CSLR-rep reduces processing time linearly. However, the number of combinations of features increases exponentially. The other reason is that the number of times dynamic-features are generated increases, as described at the end of Section 3.3. We obtain much improvement by using atomic features with CSLR-rep. For example, the processing speed obtained with the text chunker using rules (\u03b6 = 3, W = 7, +Atomic) is about 28 times faster than the speed obtained with the chunker using rules (\u03b6 = 3, W = 7, -Atomic).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagging and Chunking Speed",
"sec_num": "5.2"
},
{
"text": "We list the previous best results on English POS tagging and text chunking in Table 4 . The tagger and chunker based on AdaBoost.SDFAN show F-measures competitive with the previous best results.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 81,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An efficient digital search algorithm by using a double-array structure",
"authors": [
{
"first": "J",
"middle": [],
"last": "Aoe",
"suffix": ""
}
],
"year": 1989,
"venue": "IEEE Transactions on Software Engineering",
"volume": "15",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Aoe. An efficient digital search algorithm by using a double-array structure. In IEEE Transactions on Software Engineering, volume 15(9), 1989.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning as search optimization: approximate large margin methods for structured prediction",
"authors": [
{
"first": "H",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ICML 2005",
"volume": "",
"issue": "",
"pages": "169--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Daum\u00e9 III and D. Marcu. Learning as search optimization: approximate large margin methods for structured prediction. In Proc. of ICML 2005, pages 169-176, 2005.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Efficient Support Vector classifiers for named entity recognition",
"authors": [
{
"first": "H",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Kazawa",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of COLING 2002",
"volume": "",
"issue": "",
"pages": "390--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Isozaki and H. Kazawa. Efficient Support Vector classifiers for named entity recognition. In Proc. of COLING 2002, pages 390-396, 2002.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Fast training methods of boosting algorithms for text analysis",
"authors": [
{
"first": "T",
"middle": [],
"last": "Iwakura",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Okamoto",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of RANLP",
"volume": "",
"issue": "",
"pages": "274--279",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Iwakura and S. Okamoto. Fast training methods of boosting algorithms for text analysis. In Proc. of RANLP 2007, pages 274-279, 2007.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A fast boosting-based learner for feature-rich tagging and chunking",
"authors": [
{
"first": "T",
"middle": [],
"last": "Iwakura",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Okamoto",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Iwakura and S. Okamoto. A fast boosting-based learner for feature-rich tagging and chunking. In Proc. of CoNLL 2008, pages 17-24, 2008.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fast methods for kernel-based text analysis",
"authors": [
{
"first": "T",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of ACL-03",
"volume": "",
"issue": "",
"pages": "24--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Kudo and Y. Matsumoto. Fast methods for kernel-based text analysis. In Proc. of ACL-03, pages 24-31, 2003.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Boosting-based parse reranking with subtree features",
"authors": [
{
"first": "T",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Isozaki",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ACL 2005",
"volume": "",
"issue": "",
"pages": "189--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Kudo, J. Suzuki, and H. Isozaki. Boosting-based parse reranking with subtree features. In Proc. of ACL 2005, pages 189-196, 2005.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "M",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. Building a large annotated corpus of English: The Penn Treebank. pages 313-330, 1994.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Pruning adaptive boosting",
"authors": [
{
"first": "D",
"middle": [
"D"
],
"last": "Margineantu",
"suffix": ""
},
{
"first": "T",
"middle": [
"G"
],
"last": "Dietterich",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of ICML 1997",
"volume": "",
"issue": "",
"pages": "211--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. D. Margineantu and T. G. Dietterich. Pruning adaptive boosting. In Proc. of ICML 1997, pages 211-218, 1997.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improved boosting algorithms using confidencerated predictions",
"authors": [
{
"first": "R",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine Learning",
"volume": "37",
"issue": "",
"pages": "297--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297-336, 1999.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Boostexter: A boosting-based system for text categorization",
"authors": [
{
"first": "R",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2000,
"venue": "Machine Learning",
"volume": "39",
"issue": "",
"pages": "135--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. E. Schapire and Y. Singer. Boostexter: A boosting-based system for text categorization. Machine Learning, 39(2/3):135-168, 2000.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Guided learning for bidirectional sequence classification",
"authors": [
{
"first": "L",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Satta",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "760--767",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Shen, G. Satta, and A. Joshi. Guided learning for bidirectional sequence classification. In Proc. of ACL 2007, pages 760-767, 2007.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semi-supervised sequential labeling and segmentation using giga-word scale unlabeled data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Isozaki",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "665--673",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Suzuki and H. Isozaki. Semi-supervised sequential labeling and segmentation using giga-word scale unlabeled data. In Proc. of ACL-08: HLT, pages 665-673, June 2008.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Let X be the set of examples and Y be a set of labels {\u22121, +1}. Let F = {f 1 , f 2 , ..., f M } be M types of features represented by strings. Let S",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Find rules with given weights.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Weights of samples after learning ## r types of rules. ## |B| : The size of bucket B = {B[0], ..., B[|B| \u2212 1]} ## b, r : The current bucket and rule number ## distFT: distribute features to buckets procedure AdaBoost.SDFAN() B = distFT(S, |B|); ## Distributing features into B ## Initialize values and weights: r = 1; b = 0; c 0 = 1 2 log( W +1 W \u22121 ); For i = 1,...,m: w 1,i = exp(c 0 ); While (r \u2264 R) ## Learning R types of rules ##Select \u03bd rules and increment bucket id b R = weak-learner(B[b], S, Wr); b++; Foreach (f \u2208 R) ##Update weights with each rule c = 1 2 log( W r,+1 (f )+1 W r,\u22121 (f )+1 ); For i=1,..,m w r+1,i = w r,i exp(\u2212y i h f ,c (x i )); fr = f ; cr = c; r++; end Foreach if (b == |B|) ## Redistribution of features B = distFT(S, |B|); b=0; end if end While return F (x) = sign(c 0 + P R r=1 h fr ,cr (x)) An overview of AdaBoost.SDFAN.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "# f : a rule generated by AdaBoost.SDFAN ## sc : the score of f ## cl : the class of f ## s(f ): the feature-stem of a feature f ## p(f ): the location-marker of a feature f ## fn: the conversion result of f ## RC[fn]: scores for fn procedure ruleConv( f , sc, cl) bp = min f \u2208f p(f ) ## select the base position Foreach f \u2208 f ## generate new rule lm = p(f ) \u2212 bp ## new location-marker of f ## append new representation of f fn = fn + \"s(f ):lm\" endForeach RC[fn] = RC[fn] \u222a (\u2212bp, cl, sc) Generating CSLR-rep based rules. short), to merge similar rules. To use CSLR-rep, we convert weak-hypotheses (WHs, for short) generated by AdaBoost.SDFAN to CSLR-rep. A CSLR-rep-based WH is represented as rule, {(p 1 , cl 1 , c 1 ), ..., (p q , cl q , c q )} . The rule is a rule generated by merging rules learned by AdaBoost.SDFAN. p p , called scoring-position, denotes the position of a word to assign a score c p of a class cl p (1 \u2264 p \u2264 q) from current word. We describe an example. Let {I:\u22122, am:\u22121}, JJ, c 0 , {I: \u2212 1, am:0}, V BP, c 1",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "# RC[rc]: pairs of score-positions and scores of rc ## Rs[i]: subset of rules of i-th word ##Initial value for each word is 0-feature-setprocedure ruleApplication( {wd 1 , .., wd N }, FN ) ## For Static-feature For i = 1; i \u2264 N ; i ++ # beginning position For i = i ; i < i + W ; i++ # combination position For j = 1; j \u2264 |wd i |; j++# attributes Foreach rc \u2208 Rs[i ] lm = i \u2212 i ##current location-marker rc = rc + \"wd i,j :lm\" # If RC[rc ] is applied, # assign the scores with base position i' assignScores(RC[rc ], i ) If rc \u2282 RC Rs[i ] = Rs[i ] \u222a rc endForeach # If no subset of rules for i , go to i + 1-th word If Rs[i ] = {\u03c6} break endFor endFor ## For Dynamic-feature : EOS to BOS direction For i = N ; 1 \u2264 i ; i \u2212\u2212 # beginning position # Checking rules including Dynamic-feature db = i \u2212 W \u22121 \u2206; de = i + max( W \u22121 2 , \u2206); For i = db; i < de; i++ Foreach rc \u2208 Rs[i] lm = j \u2212 i ## current location-marker rc = rc + \"df t i :lm \" # df t j is the tag of i'-th word assignScores(RC[rc ], i) If rc \u2282 RC Rs[i] = Rs[i] \u222a rc endForeach endFor endFor Application of CSLR-rep based rules.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "For instance, \"[He] (NP) [reckons] (VP) [the current account deficit] (NP)...\" is represented by IOE2 as follows; \"He/E-NP reckons/E-VP the/I-NP current/I-NP account/I-NP deficit/E-NP\".",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"text": "Training data for experiments. POS and ETC indicate POS tagging and text chunking. The size of S, the number of cl, and M indicate the number of samples, the number of classes in each data set, and the distinct number of feature types for each pair of (W, \u2206).",
"content": "<table><tr><td>M (W, \u2206)</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF1": {
"text": "Accuracy on Test Data.",
"content": "<table><tr><td/><td/><td colspan=\"2\">POS tagging</td><td/><td/></tr><tr><td/><td/><td/><td>-Atomic</td><td/><td>+Atomic</td></tr><tr><td>(W, \u2206)/ \u03b6</td><td>1</td><td>2</td><td>3</td><td>2</td><td>3</td></tr><tr><td>(3,1)</td><td>96.81</td><td colspan=\"2\">97.09 97.05</td><td colspan=\"2\">97.00 97.04</td></tr><tr><td>(5,2)</td><td>96.96</td><td colspan=\"2\">97.30 97.30</td><td colspan=\"2\">97.25 97.28</td></tr><tr><td>(7,3)</td><td>96.99</td><td colspan=\"2\">97.36 97.30</td><td colspan=\"2\">97.31 97.34</td></tr><tr><td/><td/><td colspan=\"2\">text chunking</td><td/><td/></tr><tr><td/><td/><td/><td>-Atomic</td><td/><td>+Atomic</td></tr><tr><td>(W, \u2206)/ \u03b6</td><td>1</td><td>2</td><td>3</td><td>2</td><td>3</td></tr><tr><td>(3,1)</td><td>92.40</td><td colspan=\"2\">93.87 93.69</td><td colspan=\"2\">93.91 93.82</td></tr><tr><td>(5,2)</td><td>92.87</td><td colspan=\"2\">94.31 94.14</td><td colspan=\"2\">94.34 94.31</td></tr><tr><td>(7,3)</td><td>93.09</td><td colspan=\"2\">94.32 94.11</td><td colspan=\"2\">94.12 94.11</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF2": {
"text": "Comparison with previous best results.",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}