| { |
| "paper_id": "P97-1030", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:15:27.274322Z" |
| }, |
| "title": "Mistake-Driven Mixture of Hierarchical Tag Context Trees", |
| "authors": [ |
| { |
| "first": "Masahiko", |
| "middle": [], |
| "last": "Haruno", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "NTT Communication Science Laboratories", |
| "location": { |
| "addrLine": "1-1 Hikari-No-Oka Yokosuka-Shi Kanagawa 239", |
| "country": "Japan" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper proposes a mistake-driven mixture method for learning a tag model. The method iteratively performs two procedures: 1. constructing a tag model based on the current data distribution and 2. updating the distribution by focusing on data that are not well predicted by the constructed model. The final tag model is constructed by mixing all the models according to their performance. To well reflect the data distribution, we represent each tag model as a hierarchical tag (i.e.,NTT 1 < proper noun < noun) context tree. By using the hierarchical tag context tree, the constituents of sequential tag models gradually change from broad coverage tags (e.g.,noun) to specific exceptional words that cannot be captured by generM tags. In other words, the method incorporates not only frequent connections but also infrequent ones that are often considered to be collocationah We evaluate several tag models by implementing Japanese part-of-speech taggers that share all other conditions (i.e.,dictionary and word model) other than their tag models. The experimental results show the proposed method significantly outperforms both hand-crafted and conventional statistical methods.", |
| "pdf_parse": { |
| "paper_id": "P97-1030", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper proposes a mistake-driven mixture method for learning a tag model. The method iteratively performs two procedures: 1. constructing a tag model based on the current data distribution and 2. updating the distribution by focusing on data that are not well predicted by the constructed model. The final tag model is constructed by mixing all the models according to their performance. To well reflect the data distribution, we represent each tag model as a hierarchical tag (i.e.,NTT 1 < proper noun < noun) context tree. By using the hierarchical tag context tree, the constituents of sequential tag models gradually change from broad coverage tags (e.g.,noun) to specific exceptional words that cannot be captured by generM tags. In other words, the method incorporates not only frequent connections but also infrequent ones that are often considered to be collocationah We evaluate several tag models by implementing Japanese part-of-speech taggers that share all other conditions (i.e.,dictionary and word model) other than their tag models. The experimental results show the proposed method significantly outperforms both hand-crafted and conventional statistical methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The last few years have seen the great success of stochastic part-of-speech (POS) taggers (Church, 1988 : Kupiec, 1992 Charniak et M., 1993; Brill, 1992; Nagata, 1994) . The stochastic approach generally attains 94 to 96% accuracy and replaces the labor-intensive compilation of linguistics rules by using an automated learning algorithm. However,", |
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 103, |
| "text": "(Church, 1988", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 104, |
| "end": 118, |
| "text": ": Kupiec, 1992", |
| "ref_id": null |
| }, |
| { |
| "start": 119, |
| "end": 140, |
| "text": "Charniak et M., 1993;", |
| "ref_id": null |
| }, |
| { |
| "start": 141, |
| "end": 153, |
| "text": "Brill, 1992;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 154, |
| "end": 167, |
| "text": "Nagata, 1994)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "1NTT is an abbreviation of Nippon Telegraph and Telephone Corporation. practical systems require more accuracy because POS tagging is an inevitable pre-processing step for all practical systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To derive a new stochastic tagger, we have two options since stochastic taggers generally comprise two components: word model and tag model. The word model is a set of probabilities that a word occurs with a tag (part-of-speech) when given the preceding words and their tags in a sentence. On the contrary, the tag model is a set of probabilities that a tag appears after the preceding words and their tags.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The first option is to construct more sophisticated word models. (Charniak et al., 1993) reports that their model considers the roots and suffixes of words to greatly improve tagging accuracy for English corpora. However, the word model approach has the following shortcomings:", |
| "cite_spans": [ |
| { |
| "start": 65, |
| "end": 88, |
| "text": "(Charniak et al., 1993)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 For agglutinative languages such as Japanese and Chinese, the simple Bayes transfer rule is inapplicable because the word length of a sentence is not fixed in all possible segmentations -~. We can only use simpler word models in these languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Sophisticated word models largely depend on the target language. It is time-consuming to compile fine-grained word models for each language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The second option is to devise a new tag model. (Sch~tze and Singer. 1994) have introduced a variable-memory-length tag model. Unlike conventional bi-gram and tri-gram models, the method selects the optimal length by using the context tree (Rissanen, 1983) which was originally introduced for use in data compression (Cover and Thomas, 1991) .", |
| "cite_spans": [ |
| { |
| "start": 240, |
| "end": 256, |
| "text": "(Rissanen, 1983)", |
| "ref_id": null |
| }, |
| { |
| "start": 317, |
| "end": 341, |
| "text": "(Cover and Thomas, 1991)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Although the variable-memory length approach remarkably reduces the number of parameters, tagging accuracy is only as good as conventional methods. Why didn't the method have higher accuracy ? The crucial problem for current P(,,,)P(,,lu,,) P(wi) cannot be consid-2In P(w,]t,) = P (t,) '", |
| "cite_spans": [ |
| { |
| "start": 281, |
| "end": 285, |
| "text": "(t,)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "ered to be identical for ~ll segmentations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "tag models is the set of collocational sequences of words that cannot be captured by just their tags. Because the maximal likelihood estimator (MLE) emphasizes the most frequent connections, an exceptional connection is placed in the same class as a frequent connection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To tackle this problem, we introduce a new tag model based on the mistake-driven mixture of hierarchical tag context trees. Compared to Schiitze and Singer's context tree (Schiitze and Singer, 1994) , the hierarchical tag context tree is extended in that the context is represented by a hierarchical tag set (i.e.,NTT < proper noun < noun). This is extremely useful in capturing exceptional connections that can be detected only at the word level.", |
| "cite_spans": [ |
| { |
| "start": 171, |
| "end": 198, |
| "text": "(Schiitze and Singer, 1994)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To make the best use of the hierarchical context tree, the mistake-driven mixture method imitates the process in which linguists incorporate exceptional connections into hand-crafted rules: They first construct coarse rules which seems to cover broad range of data. They then try to analyze data by using the rules and extract exceptions that the rules cannot handle. Next they generalize the exceptions and refine the previous rules. The following two steps abstract the human algorithm for incorporating exceptional connections.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "1. construct temporary rules which seem to well generalize given data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "2. try to analyze data by using the constructed rules and extract the exceptions that cannot be correctly handled, then return to the first step and focus on the exceptions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To put the above idea into our learning algorithm, The mistake-driven mixture method attaches a weight vector to each example and iteratively performs the following two procedures in the training phase:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "1. constructing a context tree based on the current data distribution (weight vector) 2. updating the distribution (weight vector) by focusing on data not well predicted by the constructed tree. More precisely, the algorithm reduces the weight of examples that are correctly handled.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
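The two training procedures above follow the AdaBoost pattern. As a minimal sketch (not the authors' implementation — `build_model`, the error rule, and the weight-update constant are placeholder assumptions), the loop can be written as:

```python
import math

# Illustrative sketch of the mistake-driven mixture loop.  `build_model`
# stands in for hierarchical context-tree construction; the update follows
# the AdaBoost convention of down-weighting correctly handled examples.

def mistake_driven_mixture(examples, build_model, rounds=5):
    weights = [1.0] * len(examples)          # one weight per training example
    models = []
    for _ in range(rounds):
        model = build_model(examples, weights)          # step 1: fit on current distribution
        errors = [model(x) != y for x, y in examples]   # which examples are mispredicted
        err = sum(w for w, e in zip(weights, errors) if e) / sum(weights)
        err = min(max(err, 1e-9), 1 - 1e-9)             # clamp away from 0 and 1
        beta = err / (1.0 - err)                        # < 1 when the model beats chance
        # step 2: reduce the weight of correctly handled examples
        weights = [w * (1.0 if e else beta) for w, e in zip(weights, errors)]
        models.append((model, beta))
    return models

def mix_predict(models, x, labels):
    # final model: weighted vote, with heavier weight for low-error rounds
    score = {y: 0.0 for y in labels}
    for model, beta in models:
        score[model(x)] += math.log(1.0 / beta)
    return max(score, key=score.get)
```

The per-round weight `log(1/beta)` realizes "mixing all the models according to their performance": rounds with lower weighted error contribute more to the final vote.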
| { |
| "text": "For the prediction phase, it then outputs a final tag model by mixing all the constructed models according to their performance. By using the hierarchical tag context tree, the constituents of a series of tag models gradually change from broad coverage tags (e.g.,noun) to specific exceptional words that cannot be captured by general tags, In other words, the method incorporates not only frequent connections but also infrequent ones that are often considered to be exceptional.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The construction of the paper is as follows. Section 2 describes the stochastic POS tagging scheme and hierarchical tag setting. Section 3 presents a new probability estimator that uses a hierarchical tag context tree and Section 4 explains the mistakedriven mixture method. Section 5 reports a preliminary evaluation using Japanese newspaper articles. We tested several tag models by keeping all other conditions (i.e., dictionary and word model) identical. The experimental results show that the proposed method significantly outperforms both handcrafted and conventional statistical methods. Section 6 concerns related works and Sections 7 concludes the paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this section, we will briefly review the basic equations for part-of-speech tagging and introduce hierarchical-tag setting.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Equation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The tagging problem is formally defined as finding a sequence of tags tl,, that maximize the probability of input string L. P (wl,.,tl,~,L) argmaxt. P (Wl,n,tl,nlL) ", |
| "cite_spans": [ |
| { |
| "start": 126, |
| "end": 139, |
| "text": "(wl,.,tl,~,L)", |
| "ref_id": null |
| }, |
| { |
| "start": 151, |
| "end": 164, |
| "text": "(Wl,n,tl,nlL)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Equation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "= argmazq,. P(L) \u00a2~ argmaxtl ......", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Equation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We break out P(ta,~, Wl,n) as a sequence of the products of tag probability and word probability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ". ~ L P( tl,~ , Wl,~ )", |
| "sec_num": null |
| }, |
| { |
| "text": "rl P(tl,n, Wl,~) = 1-I P( u'iltl,i-l' wl,i-1)P(tiltl'i-l' wx,i ) i=1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ". ~ L P( tl,~ , Wl,~ )", |
| "sec_num": null |
| }, |
| { |
| "text": "By approximating word probability as constrained only by its tag, we obtain equation 1. Equation (1) yields various types of stochastic taggers. For example, bi-gram and tri-gram models approximate their tag probability as P(tilti-1) and P(tilti_l,ti_.), respectively. In the rest of the paper, we assume all tagging methods share the word model P(wilti) and differ only in the tag model", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ". ~ L P( tl,~ , Wl,~ )", |
| "sec_num": null |
| }, |
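A bi-gram instantiation of equation (1) can be illustrated with a toy scorer. This is a sketch under assumed toy probability tables (`tag_bigram` and `word_model` are invented for illustration), and it uses brute-force search over tag sequences instead of the Viterbi dynamic programming a real tagger would use:

```python
import itertools

# Equation (1) under a bi-gram tag model: the joint score of a tag
# sequence is prod_i P(t_i | t_{i-1}) * P(w_i | t_i).

def sequence_score(words, tags, tag_bigram, word_model, start="<s>"):
    score = 1.0
    prev = start
    for w, t in zip(words, tags):
        # unseen events get a tiny smoothing constant instead of zero
        score *= tag_bigram.get((prev, t), 1e-6) * word_model.get((w, t), 1e-6)
        prev = t
    return score

def best_tagging(words, tagset, tag_bigram, word_model):
    # brute-force argmax over all tag sequences (fine for a toy example)
    return max(itertools.product(tagset, repeat=len(words)),
               key=lambda tags: sequence_score(words, tags, tag_bigram, word_model))
```

All tag models compared in the paper share the `word_model` factor and differ only in how the tag-probability factor is conditioned.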
| { |
| "text": "P( ti ltl,i-1, Wl,i ). argmaxt ........ eL l\"I P(ti[tl,i-a' wi.i)P(wilti) (1) i=1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ". ~ L P( tl,~ , Wl,~ )", |
| "sec_num": null |
| }, |
| { |
| "text": "To construct a tag model that captures exceptional connections, we have to consider word-level context as well as tag-level. In a more general form, we introduce a tag set that has a hierarchical structure. Our tag set has a three-level structure as shown in Figure 1 . Tile topmost and the second level of the hierarchy are part-of-speech level and part-of-speech subdivision level respectively. Although stochastic taggers usually make use of subdivision level, part-of-speech level is remarkably robust Our objective is to construct a tag model that precisely evaluates P(tiltl,i-1, Wl,i) (in equation 1) by using the threelevel tag set. To construct this model, we have to answer the following questions.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 259, |
| "end": 267, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Hierarchical Tag Set", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "1. Which level is appropriate for t i .9 2. Which length is to be considered for tl,i-1 and wl,i ?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical Tag Set", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": ":3. Which level is appropriate for tl,i-1 and wl,i ?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical Tag Set", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "To resolve the first question, we fix ti at subdivision level as is done in other tag models. The second and third questions are resolved by introducing hierarchical tag context trees and mistake-driven mixture method that are respectively described in Section 3 and 4. Before moving to the next section, let us define the basic tag set. If all words are considered context candidates, the search space will be enormous. Thus, it is reasonable for the tagger to constrain the candidates to frequent open class words and closed class words. Tile basic tag set is a set of tile most detailed context elements that comprises the words selected above and part-of-speech subdivision level.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical Tag Set", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "A hierarchical tag context tree is constructed by a two-step methodology. The first step produces a context tree by using tile basic tag set. The second step then produces the hierarchical tag context tree. It generalizes the basic tag context tree and avoids over-fitting the data by replacing excessively specific context in the tree wi4h more general tags.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical Tag Context Tree", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Finally, the generated tree is transformed into a finite automaton to improve tagging efficiency (Ron et al., 1997) .", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 115, |
| "text": "(Ron et al., 1997)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical Tag Context Tree", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this section, we construct a basic tag context tree. Before going into detail of the algorithm, we briefly explain the context tree by using a simple binary case. The context tree was originally introduced in the field of data compression (Rissanen, 1983; Willems et al., 1995; Cover and Thomas, 1991) to represent how many times and in what context each symbol appeared in a sequence of symbols. Figure 2 exemplifies two context trees comprising binary symbols 'a' and 'b'. T(4) is constructed from the sequence 'baab'and T(6) from 'baabab '. The root node of T(4) explains that both 'a'and 'b ' appeared twice in 'baab' when no consideration is taken of previous symbols. The nodes of depth 1 represent an order 1 (bi-gram) model. The left node of T(4) represents that both 'a' and \"b' appeared only once after symbol 'a', while the right node of T(4) represents only 'a' occurred once after 'b '. In the same way, the node of depth 2 in T(6) represents an order 2 (tri-gram) context model. It is straightforward to extend this binary tree to a basic tag context tree. In this case, context symbols 'a' and 'b\" are replaced by an element of the basic tag set and the frequency table of each node then consists of the part-of-speech subdivision set.", |
| "cite_spans": [ |
| { |
| "start": 242, |
| "end": 258, |
| "text": "(Rissanen, 1983;", |
| "ref_id": null |
| }, |
| { |
| "start": 259, |
| "end": 280, |
| "text": "Willems et al., 1995;", |
| "ref_id": null |
| }, |
| { |
| "start": 281, |
| "end": 304, |
| "text": "Cover and Thomas, 1991)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 400, |
| "end": 409, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Constructing a Basic Tag Context Tree", |
| "sec_num": "3.1" |
| }, |
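The counting scheme behind T(4) and T(6) can be sketched as follows. This toy reconstruction grows every context up to a fixed depth, whereas the paper's procedure expands nodes selectively; it reproduces the counts described above for the running example:

```python
from collections import Counter, defaultdict

# Count, for each context (the suffix of preceding symbols, most recent
# first, as when walking down the tree), how often each symbol follows it.

def context_tree(sequence, max_depth=2):
    tree = defaultdict(Counter)   # context tuple -> counts of the next symbol
    for i, sym in enumerate(sequence):
        for d in range(max_depth + 1):
            if d <= i:            # enough history for a context of length d
                ctx = tuple(reversed(sequence[i - d:i]))
                tree[ctx][sym] += 1
    return tree
```

`tree[()]` is the root (order-0 counts), contexts of length 1 give the bi-gram nodes, and contexts of length 2 the tri-gram nodes. For a basic tag context tree, the symbols would be elements of the basic tag set rather than 'a' and 'b'.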
| { |
| "text": "The procedure construct-btree which constructs a basic tag context tree is given below. Let a set of subdivision tags to be Sl,--.,sn. Let weight[t] be a weight vector attached to the tth example x(t). Initial values of weight [t] are set to 1.", |
| "cite_spans": [ |
| { |
| "start": 227, |
| "end": 230, |
| "text": "[t]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constructing a Basic Tag Context Tree", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "1. the only node, the root, is marked with the count table (c(sl,)0,\"-, C(Sn,)~) = (0,'--.0)). 2. Apply the following recursively. Let T(t-1) be a b --(2,2) - b (3,3) . Define the resulting tree to be T(t).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 159, |
| "end": 167, |
| "text": "b (3,3)", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Constructing a Basic Tag Context Tree", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "This section delineates how a hierarchical tag context tree is constructed from a basic tag context tree. Before describing the algorithm, we prepare some definitions and notations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constructing a Hierarchical Tag Context Tree", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Let .4 be a part-of-speech subdivision set. As described in the previous section, frequency tables of each node consist of the set A. At ally node s of a context tree, let n(ats ) and /5(als ) be tile count of element a and its probability, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constructing a Hierarchical Tag Context Tree", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We introduce an information-theoretical criteria A(sb) (Weinberger et al., 1995) to evaluate the gain of expanding a node s by its daughter sb.", |
| "cite_spans": [ |
| { |
| "start": 55, |
| "end": 80, |
| "text": "(Weinberger et al., 1995)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "p(ats) _ n(als) ~bc_.a n(bls)", |
| "sec_num": null |
| }, |
| { |
| "text": "(2) aCA A(sb) is the difference in optimal code lengths when symbols at node sb are compressed by using probability distribution P(.Is) at node s and P('lsb) at node sb. Thus, the larger A(sb) is, the more meaningful it is to expand a node by sb.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "._k(sb) = Z n(alsbll\u00b0g~ )", |
| "sec_num": null |
| }, |
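The expansion gain of equation (2) can be sketched directly from the count tables. The counts below are toy values, and the code assumes every symbol counted at sb also occurs at its parent s (true by construction, since sb's data is a subset of s's):

```python
import math

# Delta(sb) = sum_a n(a|sb) * log( P(a|sb) / P(a|s) ):
# the code-length saving from modelling node sb with its own
# distribution instead of its parent's.

def prob(counts):
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

def delta(counts_sb, counts_s):
    p_sb, p_s = prob(counts_sb), prob(counts_s)
    return sum(n * math.log(p_sb[a] / p_s[a])
               for a, n in counts_sb.items() if n > 0)
```

Since n(a|sb) = n(sb) P(a|sb), this is equal to n(sb) times the Kullback-Leibler divergence between P(.|sb) and P(.|s); the gain is zero when the daughter's distribution matches the parent's, and grows with both the amount of data at sb and how much the two distributions differ.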
| { |
| "text": "Now, we go back to the hierarchical tag context tree construction. As illustrated in Figure 3 , the generation process amounts to the iterative selection of b out of word level, subdivision, part-of-speech and null (no expansion). Let us look at the procedure from the information-theoretical viewpoint. Breaking out equation 2 ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 85, |
| "end": 93, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "._k(sb) = Z n(alsbll\u00b0g~ )", |
| "sec_num": null |
| }, |
| { |
| "text": "Because the KL divergence defines a distance measure between probability distributions, P(.]sb) and P(.Is), there is the following trade-off between the two terms of equation 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "._k(sb) = Z n(alsbll\u00b0g~ )", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 The more general b is, the more subdivision symbols appear at node sb.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "._k(sb) = Z n(alsbll\u00b0g~ )", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 The more specific b is, the more /~(-[s) and P(.Isb) differ.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "._k(sb) = Z n(alsbll\u00b0g~ )", |
| "sec_num": null |
| }, |
| { |
| "text": "By using the trade-off, the optimal level of b is se-\u2022lected. Mistake-Driven Mixture of Hierarchical", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "._k(sb) = Z n(alsbll\u00b0g~ )", |
| "sec_num": null |
| }, |
| { |
| "text": "Up to this section, we introduced a new tag model that uses a single hierarchical tag context tree to cope with the exceptional connections that cannot be captured by just part-of-speech level. However, this approach has a clear limitation; the exceptional connections that do not occur so often cannot be detected by the single tree model. In such a ease, the first term n(sb) in equation 3is enormous for general b and the tree is expanded by using more general symbols.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tag Context Trees", |
| "sec_num": null |
| }, |
| { |
| "text": "To overcome this limitation, we devised the mistake-driven mixture algorithm summarized in Table 4 which constructs T context trees and outputs the final tag model. mistake-driven mixture sets the weights to 1 for all examples and repeats the following procedures T times. The algorithm first construct a hierarchical context tree by using the current weight vector. Example data are then tagged by the tree and the weights of correctly handled examples are reduced by equation (4). Finally, the final tag model is constructed by mixing T trees according to equation", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tag Context Trees", |
| "sec_num": null |
| }, |
| { |
| "text": "By using the mistake-driven mixture method, the constituents of a series of hierarchical tag context trees gradually change from broad coverage tags (e.g.,noun) to specific exceptional words that cannot be captured by part-of-speech and subdivisions. The method, by mixing different levels of trees, incorporates not only frequent connections but also infrequent ones that are often considered to be collocational without over-fitting the data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(5).", |
| "sec_num": null |
| }, |
| { |
| "text": "We performed an preliminary evaluation using the first 8939 Japanese sentences in a year's volume of newspaper articles (Mainichi, 1993) . We first automatically segmented and tagged these sentences and then revised them by hand. The total number of words in the hand-revised corpus was 226162. We trained our tag models on the corpora with every tenth sentence removed (starting with the first sentence) and then tested the removed sentences. There were 22937 words in the test corpus.", |
| "cite_spans": [ |
| { |
| "start": 120, |
| "end": 136, |
| "text": "(Mainichi, 1993)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminary Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "As the first milestone of performance, we tested a hand-crafted tag model of JUMAN (Kurohashi et al., 1994) , the most widely used Japanese part-ofspeech tagger. The tagging accuracy of JUMAN for the test corpus was only 92.0 %. This shows that our corpus is difficult to tag because the corpus contains various genres of texts; from obituaries to poetry.", |
| "cite_spans": [ |
| { |
| "start": 77, |
| "end": 107, |
| "text": "JUMAN (Kurohashi et al., 1994)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminary Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Next. we compared the mixture of bi-grams and the mixture of hierarchical tag context trees. In this experiment, only post-positional particles and auxiliaries were word-level elements of basic tags and all other elements were subdivision level. In contrast, bi-gram was constructedby using subdivision level. We set the iteration number T to 5. The results of our experiments are summarized in Figure 4 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 395, |
| "end": 403, |
| "text": "Figure 4", |
| "ref_id": "FIGREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Preliminary Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "As a single tree estimator (Number of Mixture = 1), the hierarchical tag context tree attained 94.1% accuracy, while bi-gram yielded 93.1%. A hierarchical tag context tree offers a slight improvement, but (< pt,dt, wt >) in which Pt, dt and wt represent part-of-speech, subdivision and word, respectively. Follow ; gt_l, Xt_2, ..., xt_(i_l) and Reach leaf node s low = swt-i, high = sdt-i while (max(iN(low), ,.3,(high) ", |
| "cite_spans": [ |
| { |
| "start": 205, |
| "end": 220, |
| "text": "(< pt,dt, wt >)", |
| "ref_id": null |
| }, |
| { |
| "start": 306, |
| "end": 314, |
| "text": "Follow ;", |
| "ref_id": null |
| }, |
| { |
| "start": 315, |
| "end": 320, |
| "text": "gt_l,", |
| "ref_id": null |
| }, |
| { |
| "start": 321, |
| "end": 326, |
| "text": "Xt_2,", |
| "ref_id": null |
| }, |
| { |
| "start": 327, |
| "end": 331, |
| "text": "...,", |
| "ref_id": null |
| }, |
| { |
| "start": 332, |
| "end": 340, |
| "text": "xt_(i_l)", |
| "ref_id": null |
| }, |
| { |
| "start": 395, |
| "end": 419, |
| "text": "(max(iN(low), ,.3,(high)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminary Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": ") >_ Threshold) { if(iN(low) > A(high))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminary Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Expand the tree by the node low else if (high==spt-i ) Expand the tree by the node high else low = sdt_i, high = spt-i } t=t+l while(xt is not empty) When we turn to the mixture estimator, a great difference is seen between hierarchical tag context trees and bi-grams.", |
| "cite_spans": [ |
| { |
| "start": 40, |
| "end": 54, |
| "text": "(high==spt-i )", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminary Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The hierarchical tag context trees produced by the mistake-driven mixture method, greatly improved the accuracy and overfitting data was not serious. The best and worst performances were 96.1% (Number of Mixture = 3) and 94.1% (Number of Mixture = 1), respectively. On the other hand, the performance of the bi-gram mixture was not satisfactory. Tile best and worst performances were 93.8 % (Number of Mixture = 2) and 90.8 % (Number of Mixture = 5), respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminary Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "From the result, we may say exceptional connections are well captured by hierarchical context trees but not by bi-grams. Bi-grams of subdivision are too general to selectively detect exceptions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminary Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Although statistical natural language processing has mainly focused on Maximum Likelihood Estimators, (Pereira et al., 1995) proposed a mixture approach to predict next words by using the Context Tree Weighting (CTW) method . (Willems et al., 1995) . The CTW method computes probability by mixing subtrees in a single context tree in Bayesian fashion. Although the method is very efficient, it cannot be used to construct hierarchical tag context trees.", |
| "cite_spans": [ |
| { |
| "start": 102, |
| "end": 124, |
| "text": "(Pereira et al., 1995)", |
| "ref_id": null |
| }, |
| { |
| "start": 226, |
| "end": 248, |
| "text": "(Willems et al., 1995)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Various kinds of re-sampling techniques have been studied in statistics (Efron, 1979; Efron and Tibshirani, 1993) and machine learning (Breiman, 1996; Hull et al., 1996; Freund and Schapire, 1996a) .", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 85, |
| "text": "(Efron, 1979;", |
| "ref_id": null |
| }, |
| { |
| "start": 86, |
| "end": 113, |
| "text": "Efron and Tibshirani, 1993)", |
| "ref_id": null |
| }, |
| { |
| "start": 135, |
| "end": 150, |
| "text": "(Breiman, 1996;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 151, |
| "end": 169, |
| "text": "Hull et al., 1996;", |
| "ref_id": null |
| }, |
| { |
| "start": 170, |
| "end": 197, |
| "text": "Freund and Schapire, 1996a)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In particular, the mistake-driven mixture algorithm g- , 1996a) . The Adaboost method was designed to construct a high-performance predictor by iteratively calling a weak learning algorithm (that is slightly better than random guess). An empirical work reports that the method greatly improved the performance of decision-tree, k-nearestneighbor, and other learning methods given relatively simple and sparse data (Freund and Schapire, 1996b) . We borrowed the idea of re-sampling to detect exceptional connections and first proved that such a re-sampling method is also effective for a practical application using a large amount of data. The next step is to fill the gap between theory and practition. Most theoretical work on re-sampling assumes i.i.d (identically, independently distributed) samples. This is not a realistic assumption in partof-speech tagging and other NL applications. An interesting future research direction is to construct a theory that handles Markov processes.", |
| "cite_spans": [ |
| { |
| "start": 55, |
| "end": 63, |
| "text": ", 1996a)", |
| "ref_id": null |
| }, |
| { |
| "start": 414, |
| "end": 442, |
| "text": "(Freund and Schapire, 1996b)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
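The Adaboost-style reweighting described above can be sketched as follows. This is a minimal illustration of the mistake-driven mixture scheme, with an illustrative interface (the function names and the weak-learner protocol are assumptions, not the paper's tagger code): correctly predicted examples are down-weighted by β_t = ε_t/(1−ε_t), and the final model votes with weights log(1/β_t).

```python
import math

def mistake_driven_mixture(examples, weak_learn, T):
    """AdaBoost-style mistake-driven mixture (a sketch, not the paper's code).

    examples   -- list of (x, y) pairs
    weak_learn -- function(examples, weights) -> hypothesis h, with h(x) -> label
    T          -- number of boosting rounds (models to mix)
    """
    n = len(examples)
    weight = [1.0] * n                       # initialize weight[i] = 1
    models = []                              # list of (log(1/beta_t), h_t)
    for _ in range(T):
        h = weak_learn(examples, weight)
        errors = {i for i, (x, y) in enumerate(examples) if h(x) != y}
        eps = sum(weight[i] for i in errors) / sum(weight)
        if eps == 0.0:                       # perfect round: keep it and stop
            models.append((1.0, h))
            break
        if eps >= 0.5:                       # weak learner must beat random guessing
            break
        beta = eps / (1.0 - eps)
        for i in range(n):                   # down-weight correct examples, so the
            if i not in errors:              # next round focuses on the mistakes
                weight[i] *= beta
        models.append((math.log(1.0 / beta), h))

    def mixed(x):
        # final model: vote of all h_t, weighted by log(1/beta_t)
        votes = {}
        for w, h in models:
            votes[h(x)] = votes.get(h(x), 0.0) + w
        return max(votes, key=votes.get)
    return mixed
```

With a threshold-stump weak learner, the mixture reproduces the weak learner's decisions on separable toy data; on harder data, later rounds concentrate on the exceptional examples the earlier models missed.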
| { |
| "text": "We have described a new tag model that uses mistake-driven mixture to produce hierarchical tag context trees that can deal with exceptional connections whose detection is not possible at part-ofspeech level. Our experinaental results show that combining hierarchical tag context trees with the mistake-driven mixture method is extremely effective for 1. incorporating exceptional connections and 2. avoiding data over-fitting. Although we have focused on part-of-speech tagging in this paper, the mistake-driven mixture method should be useful for other applications because detecting and incorporating exceptions is a central problem in corpus-based NLP. We are now costructing a Japanese dependency parser that employes mistake-driven mixture of decision trees.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "T.M. Cover and J.A. Thomas, 1991. Elements of Information Theory. John Wiley & Sons. B. Efron and R. Tibshirani, 1993 ", |
| "cite_spans": [ |
| { |
| "start": 5, |
| "end": 14, |
| "text": "Cover and", |
| "ref_id": null |
| }, |
| { |
| "start": 15, |
| "end": 78, |
| "text": "J.A. Thomas, 1991. Elements of Information Theory. John Wiley &", |
| "ref_id": null |
| }, |
| { |
| "start": 79, |
| "end": 97, |
| "text": "Sons. B. Efron and", |
| "ref_id": null |
| }, |
| { |
| "start": 98, |
| "end": 117, |
| "text": "R. Tibshirani, 1993", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "annex", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Bagging predictors", |
| "authors": [ |
| { |
| "first": "Leo", |
| "middle": [], |
| "last": "Breiman", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Machine Learning", |
| "volume": "24", |
| "issue": "", |
| "pages": "123--140", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leo Breiman. 1996. Bagging predictors. Machine Learning, 24(2):123-140, August.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A simple rule-based part of speech tagger", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Brill", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proc. Third Conference on Applied Natural Language Processin 9", |
| "volume": "", |
| "issue": "", |
| "pages": "152--155", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Brill. 1992. A simple rule-based part of speech tagger. In Proc. Third Conference on Applied Natural Language Processin 9, pages 152-155.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Neil Jacobson, and Mike Perkowits. 1993. Equations for Part-of-Speech Tagging", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| }, |
| { |
| "first": "Curtis", |
| "middle": [], |
| "last": "Hendrickson", |
| "suffix": "" |
| }, |
| { |
| "first": "Neil", |
| "middle": [], |
| "last": "Jacobson", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Perkowits", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "Proc. 11th AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "784--789", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Charniak, Curtis Hendrickson, Neil Jacob- son, and Mike Perkowits. 1993. Equations for Part-of-Speech Tagging. In Proc. 11th AAAI, pages 784-789.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A stochastic parts program and noun phrase parser for unrestricted text", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [ |
| "W" |
| ], |
| "last": "Church", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Proc. ACL 2nd Conference on Applied Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "126--143", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. W. Church. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Proc. ACL 2nd Conference on Applied Natural Language Processing, pages 126-143.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Hierarchical Tag Set against data sparseness. The bottom level is word level and is indispensable in coping with exceptional and collocational sequences of words.", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Context Trees for 'baab\" and 'baabab' the last constructed tree with counts of nodes z, (c(sl,z),-.., c(sn,z)). After the next symbol whose subdivision is x(t) is observed, generate the next tree T(t) as follows: follow the T(t-1), starting at the root and taking the branch indicated by each successive symbol in the past sequence by using basic tag level.For each node z visited, increment the component count c(x(t),:) by weight[t]. Continue until node w is a leaf node. 3. If w is a leaf, extend the tree by creating new leaves: c(x(t),wsl)=...=c(x(t),wsn) = weight[t], c(x(t),wsl) ..... c(x(t),wsn)=O.", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "num": null, |
| "type_str": "figure", |
| "text": "as (3), 2x(sb) is represented as the product of the frequencies of all subdivision symbols at node sb and Kullback-Leibler (KL) divergence. n(alsb), P(alsb) A(sb)= n(sb) E --*og --ac_a n(sb) p(als ) = n(sb)~ P(alsb)log P(alsb) ~g.-t P( als ) = n(sb)D~.L(P(.[sb),/~(.[s))", |
| "uris": null |
| }, |
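The split gain Δ(sb) above is just the node frequency times the KL divergence between the candidate child's distribution and its parent's. A minimal sketch (the function and argument names are illustrative, not from the paper):

```python
import math

def split_gain(child_counts, parent_probs):
    """Delta(sb): frequency of node sb times KL( P(.|sb) || P(.|s) ).

    child_counts -- {symbol a: n(a|sb)}, counts observed at candidate node sb
    parent_probs -- {symbol a: P(a|s)}, distribution estimated at parent node s
    """
    n_sb = sum(child_counts.values())
    gain = 0.0
    for a, n_a in child_counts.items():
        if n_a == 0:
            continue                          # 0 * log(0/p) contributes nothing
        p_child = n_a / n_sb                  # P(a|sb) = n(a|sb) / n(sb)
        gain += n_a * math.log(p_child / parent_probs[a])
    return gain                               # = n(sb) * sum_a P(a|sb) log(P(a|sb)/P(a|s))
```

The gain is zero when the child's distribution matches the parent's, and grows both with how sharply the candidate context changes the tag distribution and with how often that context occurs, so frequent, informative extensions are preferred.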
| "FIGREF3": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Constructing Hierarchical Tag Context Tree training examples consist of a sequence of triples, < pt,st,wt >, in which Pt, st and wt represent part-of-speech, subdivision and word, respectively. Eachtime the algorithm reads an example, it first reaches current leaf node s by following the past sequence, computes A(sb), and then selects the optimal b. The initially constructed basic tag context tree is used to compute A(sb)s.", |
| "uris": null |
| }, |
| "FIGREF4": { |
| "num": null, |
| "type_str": "figure", |
| "text": "4", |
| "uris": null |
| }, |
| "FIGREF5": { |
| "num": null, |
| "type_str": "figure", |
| "text": "examples correctly predicted by ht, update the weights vector to be weight[i] = weight[i]flt (4) Output a final tag model h I = ET=l(log~)ht/ET=l(log~)", |
| "uris": null |
| }, |
| "FIGREF6": { |
| "num": null, |
| "type_str": "figure", |
| "text": "............. ................... \u2022 2. j,\" ......................................................... Context Tree Mixture v.s. Bi-gram Mixture was directly motivated by Adaboost (Freund and Schapire", |
| "uris": null |
| }, |
| "TABREF0": { |
| "html": null, |
| "content": "<table><tr><td>Assume that the</td></tr></table>", |
| "type_str": "table", |
| "text": "summarizes the algorithm construct-htree that constructs the hierarchical tag context tree.First, construct-htree generates a basic tag context tree by calling construct-btree.", |
| "num": null |
| }, |
| "TABREF2": { |
| "html": null, |
| "content": "<table><tr><td colspan=\"2\">Initialize the weight vector weight[i] =1 for i = 1 ..... N</td></tr><tr><td>Do</td><td>for t = 1,2 ..... T</td></tr><tr><td/><td>Call construct-htree providing it with the weight vector weight D and</td></tr><tr><td/><td>Construct a part-of-speech tagger ht</td></tr><tr><td/><td>Let Error be a set of examples that are not identified by ht</td></tr><tr><td/><td>\u2022 Compute the error rate of hi: et = EicError we*ght[2]/Y\"~i=l weight[i] \u2022 N</td></tr></table>", |
| "type_str": "table", |
| "text": "Algorithm construct-htree Input: sequence of N examples < Pl, dl, wl >, . \u2022., < pN, dN, WN > in which Pi, di and wi represent part-of-speech, subdivision and word, respectively.", |
| "num": null |
| }, |
| "TABREF3": { |
| "html": null, |
| "content": "<table><tr><td>: Algorithm mistake-driven mixture</td></tr><tr><td>not a gret deal\u2022 This conclusion agrees with Schiitze</td></tr><tr><td>and Singer's experiments that used a context tree of</td></tr><tr><td>usual part-of-speech.</td></tr></table>", |
| "type_str": "table", |
| "text": "", |
| "num": null |
| } |
| } |
| } |
| } |