{
"paper_id": "C94-1025",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:49:54.330759Z"
},
"title": "",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The described tagger is based on a hidden Markov model and uses tags composed of features such as part-of-speech, gender, etc. The contextual probability of a tag (state transition probability) is deduced from the contextual probabilities of its feature-value-pairs. This approach is advantageous when the available training corpus is small and the tag set large, which can be the case with morphologically rich languages.",
"pdf_parse": {
"paper_id": "C94-1025",
"_pdf_hash": "",
"abstract": [
{
"text": "The described tagger is based on a hidden Markov model and uses tags composed of features such as part-of-speech, gender, etc. The contextual probability of a tag (state transition probability) is deduced from the contextual probabilities of its feature-value-pairs. This approach is advantageous when the available training corpus is small and the tag set large, which can be the case with morphologically rich languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The present article describes a probabilistic tagger based on a hidden Markov model (HMM) (Rabiner, 1990) and employs tags which are feature structures. Their features concern part-of-speech (POS), gender, number, etc. and have only atomic values.",
"cite_spans": [
{
"start": 88,
"end": 94,
"text": "(IIMM)",
"ref_id": null
},
{
"start": 95,
"end": 109,
"text": "(Rabiner, 1990",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "Usually, the contextual probability of a tag (state transition probability) is estimated by dividing a trigram frequency by a bigram frequency (second order HMM). With a large tag set, resulting from the fact that the tags contain besides the POS a lot of morphological information, and with only a small training corpus available, most of these frequencies are too low for an exact estimation of contextual probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "Our feature structure tagger estimates these probabilities by connecting contextual probabilities of the single feature-value-pairs (fv-pairs) of the tags (cf. sec. 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "The starting point for the implementation of the feature structure tagger was a second-order-HMM tagger (trigrams) based on a modified version of the Viterbi algorithm (Viterbi, 1967; Church, 1988) which we had earlier implemented in C (Kempe, 1994). There we modified the calculus of the contextual probabilities of the tags in the above-described way (cf. sec. 4).",
"cite_spans": [
{
"start": 171,
"end": 186,
"text": "(Viterbi, 1967;",
"ref_id": "BIBREF9"
},
{
"start": 187,
"end": 201,
"text": "Chllrch, 1988)",
"ref_id": null
},
{
"start": 238,
"end": 254,
"text": "C (l(empe ,1994)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "A test of both taggers under the same conditions on a French corpus 1 has shown that the feature structure tagger is clearly better when the available training corpus is small and the tag set is large but the tags are decomposable into relatively few fv-pairs. The latter can be the case with morphologically rich languages when the tags contain a lot of morphological information (cf. sec. 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "1 I am much obliged to Achim Stein and Leo Wanner, Romance Dept., Univ. Stuttgart, Germany, for providing the corpus and a dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "In order to assign tags to a word sequence, an HMM can be used where the tagger selects among all possible tag sequences the most probable one (Garside, Leech and Sampson, 1987; Church, 1988; Brown et al., 1989; Rabiner, 1990). The joint probability of a tag sequence T = t0...tN-1 given a word sequence W = w0...wN-1 is, in the case of a second order HMM:",
"cite_spans": [
{
"start": 145,
"end": 180,
"text": "(Garside, Leech and Saulpson, 1987;",
"ref_id": null
},
{
"start": 181,
"end": 196,
"text": "(Tlnlrch, 1988;",
"ref_id": null
},
{
"start": 197,
"end": 217,
"text": "Brown e.t al., 1989;",
"ref_id": null
},
{
"start": 218,
"end": 232,
"text": "Rabiner, 1990)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MATHEMATICAL BACKGROUND",
"sec_num": "2"
},
{
"text": "p(T, W) = π_{t0,t1} · p(w0|t0) · p(w1|t1) · ∏_{i=2}^{N-1} p(wi|ti) p(ti|ti-2 ti-1)    (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MATHEMATICAL BACKGROUND",
"sec_num": "2"
},
{
"text": "The term π_{t0,t1} stands for the initial state probability, i.e. the probability that the sequence begins with the first two tags. N is the number of words in the sequence, i.e. the corpus size. The term p(wi|ti) is the probability of a word wi in the context of the assigned tag ti. It is called observation symbol probability (lexical probability) and can be estimated by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MATHEMATICAL BACKGROUND",
"sec_num": "2"
},
{
"text": "p(wi|ti) = f(wi ti) / f(ti)    (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MATHEMATICAL BACKGROUND",
"sec_num": "2"
},
{
"text": "The second order state transition probability (contextual probability) p(ti|ti-2 ti-1) in formula (1) expresses how probable it is that the tag ti appears in the context of its two preceding tags ti-2 and ti-1. It is usually estimated as the ratio of the frequency of the trigram (ti-2, ti-1, ti) in a given training corpus to the frequency of the bigram (ti-2, ti-1) in the same corpus:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MATHEMATICAL BACKGROUND",
"sec_num": "2"
},
{
"text": "p(ti|ti-2 ti-1) = f(ti-2 ti-1 ti) / f(ti-2 ti-1)    (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MATHEMATICAL BACKGROUND",
"sec_num": "2"
},
{
"text": "With a large tag set and a relatively small hand-tagged training corpus formula (3) has an important disadvantage: the majority of transition probabilities cannot be estimated exactly because most of the possible trigrams (sequences of three consecutive tags) will not appear at all or only a few times 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MATHEMATICAL BACKGROUND",
"sec_num": "2"
},
{
"text": "In our example we have a French training corpus of 10,000 words tagged with a set of 386 different tags which could form 386^3 = 57,512,456 different trigrams, but because of the corpus size no more than 10,000 - 2 trigrams can appear. Actually, their number was only 4,815, i.e. 0.008 % of all possible ones, because some of them appeared more than once (table 1). When we divide e.g. a trigram frequency 1 by a bigram frequency 2 according to formula (3) we get the probability p=0.5 but we cannot trust it to be exact because the frequencies it is based on are too small.",
"cite_spans": [],
"ref_spans": [
{
"start": 362,
"end": 371,
"text": "(table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "MATHEMATICAL BACKGROUND",
"sec_num": "2"
},
{
"text": "We can take advantage of the fact that the 386 tags are constituted by only 57 different fv-pairs concerning POS, gender, number, etc. If we consider probabilistic relations between single fv-pairs then we get higher frequencies (fig. 2) and the resulting probabilities are more exact.",
"cite_spans": [],
"ref_spans": [
{
"start": 231,
"end": 237,
"text": "fig. 2",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "MATHEMATICAL BACKGROUND",
"sec_num": "2"
},
{
"text": "From the equations ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MATHEMATICAL BACKGROUND",
"sec_num": "2"
},
{
"text": "p(ti|Ci) = p(ei0|Ci) · ∏_{k=1} p( eik | Ci ∧ ⋀_{j=0}^{k-1} eij )    (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MATHEMATICAL BACKGROUND",
"sec_num": "2"
},
{
"text": "The latter formula 3 describes the relation between the contextual probability of a tag and the contextual probabilities of its fv-pairs. The unification of morphological features inside a noun phrase is accomplished indirectly. In a given context of fv-pairs the correct fv-pair obtains the probability p=1 and therefore will not influence the probability of the tag to which it belongs (e.g. pc( 0num:SG | ...) = 1 in fig. 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MATHEMATICAL BACKGROUND",
"sec_num": "2"
},
{
"text": "In the training process we are not interested in analysing and storing the contextual probabilities (state transition probabilities) of whole tags but of single fv-pairs. We note them in terms of probabilistic feature relations (PFR):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "PFR: ( ei | Ci^sub ; p(ei|Ci^sub) )    (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "which later, in the tagging process, will be combined in order to obtain the contextual tag probabilities. The term ei in formula (7) is an fv-pair. Ci^sub is a reduced context which contains only a subset of the fv-pairs of a really appearing context Ci (fig. 1). Ci^sub is obtained from Ci by eliminating all fv-pairs which do not influence the relative frequency of ei, according to the condition:",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 258,
"text": "fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(ei|Ci^sub) / p(ei|Ci) ∈ [1 - ε, 1 + ε]",
"eq_num": "(8)"
}
],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "The considered fv-pair has nearly 4 the same probability in the complete and in the reduced contexts, i.e. Ci does not supply more information about the probability of ei than Ci^sub does (fig. 2).",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 195,
"text": "fig. 2)",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "The presence of 1num:SG in tag ti-1 does not influence the probability of 0gen:FEM in tag ti. Therefore 1num:SG can be eliminated. Only fv-pairs which really have an influence remain in the context. The reduced context Ci^sub with fewer fv-pairs, which we obtain this way, is more general (fig. 1b).",
"cite_spans": [],
"ref_spans": [
{
"start": 287,
"end": 296,
"text": "(fig. lb)",
"ref_id": null
}
],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "In the given training corpus, the probability of 0gen:FEM in the context Ci^sub is p0 = 170/174 = 0.977 (cf. p0 in PFR0 in fig. 2), which is near to pc = 1. The reduced context Ci^sub is used to form a PFR which will be stored.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 124,
"text": "fig. 2",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "We see two advantages in the use of reduced contexts instead of complete ones:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "(1) A great number of complete contexts containing many fv-pairs can lead, after elimination of irrelevant fv-pairs, to the same PFR, which makes the number of all possible PFRs much smaller than the number of all possible trigrams (cf. sec. 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "(2) The probability of an fv-pair can be estimated more exactly in a reduced context than in a complete one because of the higher frequencies in the first case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "The Generation of PFRs In the training process we first extract from a training corpus a set of trigrams where the tags are split up into their fv-pairs. From these trigrams a set of PFRs is generated separately for every fv-pair ei. We examined four different methods for this procedure:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "Method 1-3: For every trigram we generate all possible subsets of its fv-pairs. Many trigrams, e.g. if they differ in only one fv-pair, have most of their subsets of fv-pairs in common. Both the complete trigrams and the subsets constitute together the set of contexts and subcontexts (Ci and Ci^sub) wherein an fv-pair could appear. To generate PFRs for a given fv-pair, we preselect and mark those (sub-)contexts which are supposed to have an influence on the contextual probability of the fv-pair. A (sub-)context will not be preselected if its frequency is smaller than a defined threshold. We use different ways for the preselection:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "Method 1: A (sub-)context will be preselected if the considered fv-pair itself or an fv-pair belonging to the same feature type ever appears in this (sub-)context. E.g., if gen:MAS appears in a certain (sub-)context then this (sub-)context will be preselected for gen:FEM too. Furthermore, it is possible to impose special conditions on the preselection, e.g. that a (sub-)context can only be preselected if it contains a POS feature in tag ti and ti-1 (cf. fig. 1a: 0pos and 1pos).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "Method 2: In order to preselect (sub-)contexts for an fv-pair, we generate a decision tree 5 (Quinlan, 1983) where the feature of the fv-pair, e.g. gen, num etc., serves to classify all existing (sub-)contexts. E.g., num produces three classes of contexts: those containing the fv-pair 0num:SG, those with 0num:PL and those without a 0num feature. We assign to the tree nodes other features than the one upon which the classification is based. The root node is labeled with the feature from which we expect most information about the probability of the currently considered feature. The values of the root node feature are assigned to the branches starting at the root node. We continue the branching until there remain no features with an expected information gain and a frequency higher than defined thresholds. To every leaf of the tree corresponds a (sub-)context which will be marked and thus preselected for further analysis. (5 Suggested by Helmut Schmid, IMS, Univ. Stuttgart, Germany. For reasons of space we explain only how we employ decision trees for our purposes. For details about the automatic generation of such trees see Quinlan (1983).)",
"cite_spans": [
{
"start": 94,
"end": 103,
"text": "(Quinlan,",
"ref_id": null
},
{
"start": 104,
"end": 109,
"text": "I983)",
"ref_id": null
},
{
"start": 1025,
"end": 1038,
"text": "Quinhm (1983)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "Method 3: For each fv-pair concerning POS we preselect every (sub-)context containing only POS features in tag ti-2 and ti-1 (classical POS trigram), e.g. 2pos:PREP 1pos:DET for 0pos:NOUN. For the other fv-pairs we mark every (sub-)context containing any fv-pair of the same type in the previous tag ti-1 and any POS features in tag ti-1 and ti, e.g. 1pos:DET 1gen:FEM 0pos:NOUN for 0gen:FEM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "With the methods 1-3, we next eliminate from every preselected (sub-)context all fv-pairs which in the above-described sense do not influence the relative frequency of the currently considered fv-pair (eq. 8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "Method 4: From the set of trigrams extracted from a training corpus we generate, separately for every fv-pair, a binary-branched decision tree which shall describe various contextual probabilities of this fv-pair. The tree is generated by a modified version of the ID3 algorithm (Quinlan, 1983) and is similar to the one described by Schmid (1994).",
"cite_spans": [
{
"start": 281,
"end": 296,
"text": "(Quildan, 1983)",
"ref_id": null
},
{
"start": 337,
"end": 350,
"text": "Schmid (1994)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "We start with a binary classification of all trigrams based on the considered fv-pair. E.g., a classification for gen:FEM will divide the set of trigrams in two subsets, one where the trigrams contain 0gen:FEM in the tag ti and one where they do not. (Fig. 3 caption: Decision tree for the fv-pair 0gen:FEM. Every number is a probability of 0gen:FEM in the context described by the path from the root node to the node labeled with the number.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "The tree is built up recursively (fig. 3). At each step, i.e. with the construction of each node, we test which one of the other fv-pairs delivers most information concerning the above-described classification. The current node will be labeled with this fv-pair. One of its two branches concerns the trigrams which contain the fv-pair, the other branch concerns the trigrams which do not contain it. The recursive expansion of the tree stops if either the information gained by consulting further fv-pairs or the frequencies upon which the calculus is based are smaller than defined thresholds. (The position index at the beginning of every feature-value-pair indicates the tag to which it belongs; e.g. 0gen:FEM belongs to tag ti and 2num:SG to ti-2.)",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 41,
"text": "fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "TRAINING ALGORITHM",
"sec_num": "3"
},
{
"text": "The starting point for the implementation of the feature structure tagger was a second-order-HMM tagger (trigrams) based on a modified version of the Viterbi algorithm (Viterbi, 1967; Church, 1988) which we had earlier implemented in C (Kempe, 1994). There we replaced the function which estimated the contextual probability of a tag (state transition probability) by dividing a trigram frequency by a bigram frequency (eq. 3) with a function which accomplished this calculus either using PFRs in the above-described way (eqs. 6, 7) or by consulting a decision tree (fig. 3).",
"cite_spans": [
{
"start": 163,
"end": 178,
"text": "(Viterbi, 1967;",
"ref_id": "BIBREF9"
},
{
"start": 179,
"end": 192,
"text": "Church, 1988)",
"ref_id": "BIBREF1"
},
{
"start": 231,
"end": 243,
"text": "(Kempe ,1994",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 565,
"end": 572,
"text": "fig. 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "TAGGING ALGORITHM",
"sec_num": "4"
},
{
"text": "To estimate the contextual probability of a tag we have to know the contextual probabilities of its fv-pairs in order to multiply them (eq. 6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TAGGING ALGORITHM",
"sec_num": "4"
},
{
"text": "Using PFRs generated by method 1 or 2, when e.g. looking for the probability p(0pos:ADJ | ...) from figure 2, we may find in the list of PFRs, instead of a PFR which would directly correspond (but is not stored), two PFRs which both contain subsets of the fv-pairs of the required complete context. As there exists no mathematical relation between these three probabilities, we simply average p1 and p2 to get p because this gives as good tagging results as a number of other more complicated approaches which we examined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TAGGING ALGORITHM",
"sec_num": "4"
},
{
"text": "PFRs generated by method 3 do not create this problem. For every complete context only one PFR is stored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TAGGING ALGORITHM",
"sec_num": "4"
},
{
"text": "When we use the set of decision trees generated by method 4, we obtain for every fv-pair in every possible context only one probability by going down the relevant branches until a probability is reached.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TAGGING ALGORITHM",
"sec_num": "4"
},
{
"text": "In opposition to the PFRs of the other methods, the decision trees also contain negative information about the context of an fv-pair, i.e. not only which fv-pairs have to be in the context but also which ones must be absent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TAGGING ALGORITHM",
"sec_num": "4"
},
{
"text": "In the training and tagging process we experimented with different values for parameters like: minimal admitted frequency for preselection, admitted percentual difference ε between probabilities considered to be equal, etc. (cf. sec. 3). The feature structure tagger was trained on the French 10,000 words corpus already mentioned in table 1, with the four different training methods (sec. 3). When tagging a 6,000 words corpus 6 with an average ambiguity of 2.63 tags per word (after the dictionary [...]). (Table 2 legend: tT → \"traditional\" HMM-tagger, lpT → \"tagger\" considering only lexical probabilities, fsT1..4 → feature structure tagger trained with method 1..4, HMM order 1 → bigrams, 2 → trigrams. Table 2: Comparison of the tagging accuracy with different taggers, corpora, tag sets and HMM orders.) Comparatively, we used a \"traditional\" HMM-tagger (cf. sec. 4) on the same training and test corpora and got an accuracy of 83.23 % 7, i.e. the error rate was about 50 % higher than with the feature structure tagger (table 2).",
"cite_spans": [],
"ref_spans": [
{
"start": 691,
"end": 698,
"text": "Table 2",
"ref_id": null
},
{
"start": 1012,
"end": 1021,
"text": "(table 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "TAGGING RESULTS",
"sec_num": "5"
},
{
"text": "When we used a tool which always selects the lexically most probable tag without considering the context we obtained an accuracy of 83.81 %, which is even better than with the \"traditional\" HMM-tagger.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TAGGING RESULTS",
"sec_num": "5"
},
{
"text": "Provided with enough training data and working on a small tag set, our \"traditional\" tagger got an accuracy of 96.16 % (Kempe, 1994), which is usual in this case (Cutting et al., 1992). The English test corpus we used here had an average ambiguity of 2.61 tags per word which is amazingly similar to the ambiguity of the French corpus.",
"cite_spans": [
{
"start": 119,
"end": 132,
"text": "(Kempe ,1994)",
"ref_id": "BIBREF5"
},
{
"start": 164,
"end": 185,
"text": "(Cutting et a1.,1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TAGGING RESULTS",
"sec_num": "5"
},
{
"text": "The feature structure tagger is clearly better when the available training corpus is small and the tag set large but the tags are decomposable into few fv-pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TAGGING RESULTS",
"sec_num": "5"
},
{
"text": "We intend to search for other similar models while keeping in mind the basic idea described above: splitting up a tag into fv-pairs and deducing its contextual probability from the contextual probabilities of its fv-pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FURTHER RESEARCH",
"sec_num": "6"
},
{
"text": "Furthermore, it may be preferable to split up the tags only when the frequencies are too small 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FURTHER RESEARCH",
"sec_num": "6"
},
{
"text": "2 A detailed description of problems caused by small and zero frequencies was given by Gale and Church (1989).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A small change in the probability caused by the elimination of fv-pairs from the context is admitted if it does not exceed a defined small percentage ε. (We used ε = 3%.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "7 For a similar experiment for German (20,000 words training corpus, 689 tags, trigrams) an accuracy of 72.5 % has been reported (Wothke et al., 1993, p. 21). 8 Suggested by Ted Briscoe, Rank Xerox Research Centre, Grenoble, France.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Statistical Approach to Machine Translation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, P.F. et al. (1989). A Statistical Approach to Machine Translation. Technical Report, RC 14773, 7/17/89, IBM Research Division.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text",
"authors": [
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
}
],
"year": 1988,
"venue": "Proc. 2nd Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "136--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, K.W. (1988). A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. In Proc. 2nd Conference on Applied Natural Language Processing, ACL, pp. 136-143.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Practical Part-of-Speech Tagger",
"authors": [
{
"first": "D",
"middle": [],
"last": "Cutting",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. 3rd Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cutting, D. et al. (1992). A Practical Part-of-Speech Tagger. In Proc. 3rd Conference on Applied Natural Language Processing, ACL. Trento, Italy.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "What's Wrong with Adding One",
"authors": [
{
"first": "W",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
},
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
}
],
"year": 1989,
"venue": "Statistical Research Reports",
"volume": "",
"issue": "90",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gale, W.A. and Church, K.W. (1989). What's Wrong with Adding One?. Statistical Research Reports, No. 90, AT&T Bell Laboratories, Murray Hill.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The Computational Analysis of English: A Corpus-based Approach",
"authors": [
{
"first": "R",
"middle": [],
"last": "Garside",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Leech",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Sampson",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Garside, R., Leech, G. and Sampson, G. (1987). The Computational Analysis of English: A Corpus-based Approach. London: Longman.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Probabilistic Tagger and an Analysis of Tagging Errors",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kempe",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kempe, A. (1994). A Probabilistic Tagger and an Analysis of Tagging Errors. Research Report. IMS, Univ. of Stuttgart.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning Efficient Classification Procedures and Their Application to Chess End Games",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1983,
"venue": "Machine Learning: An artificial intelligence approach",
"volume": "",
"issue": "",
"pages": "463--482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quinlan, J.R. (1983). Learning Efficient Classification Procedures and Their Application to Chess End Games. In Michalski, R., Carbonell, J. and Mitchell, T. (Eds.) Machine Learning: An artificial intelligence approach, pp. 463-482. San Mateo, California: Morgan Kaufmann.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition",
"authors": [
{
"first": "L",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1990,
"venue": "Readings in Speech Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rabiner, L.R. (1990). A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. In Waibel, A. and Lee, K.F. (Eds.) Readings in Speech Recognition. San Mateo, California: Morgan Kaufmann.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Probabilistic Part-of-Speech Tagging Using Decision Trees",
"authors": [
{
"first": "H",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schmid, H. (1994). Probabilistic Part-of-Speech Tagging Using Decision Trees. Research Report. IMS, Univ. of Stuttgart.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm",
"authors": [
{
"first": "A",
"middle": [
"J"
],
"last": "Viterbi",
"suffix": ""
}
],
"year": 1967,
"venue": "Proceedings of IEEE",
"volume": "61",
"issue": "",
"pages": "268--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viterbi, A.J. (1967). Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm. In Proceedings of IEEE, vol. 61, pp. 268-278.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Statistically Based Automatic Tagging of German Text Corpora with Parts-of-Speech - Some Experiments",
"authors": [
{
"first": "K",
"middle": [],
"last": "Wothke",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wothke, K. et al. (1993). Statistically Based Automatic Tagging of German Text Corpora with Parts-of-Speech - Some Experiments. Research Report, Doc. No. TR 75.93.02. Heidelberg Scientific Center, IBM Germany.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "the context of ti and contains the tags ti-2 and ti-1 follows",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": ":SG | ...) = 1 in fig. 2). A wrong fv-pair would obtain p=0 and make the whole tag impossible. 3 Suggested by Mats Rooth, IMS, Univ. Stuttgart, Germany",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "(a) Complete context Ci and (b) reduced context Ci^sub of the feature-value-pair ei = 0gen:FEM. In the example (fig. 1a) we consider the fv-pair 0gen:FEM. Within the given training corpus, its probability in the complete context Ci, i.e. in the context of all the other fv-pairs of figure 1a, is pc = 44/44 = 1 (cf. pc in fig. 2).",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Decision tree for the fv-pair 0gen:FEM",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "p( 0gen:FEM 0num:SG 0pos:ADJ | 1gen:FEM 1num:SG 1pos:NOUN 2gen:FEM 2num:SG 2pos:DET 2typ:DEF ) = 44/298 = 0.148\npc( 0gen:FEM | 0num:SG 0pos:ADJ 1gen:FEM 1num:SG 1pos:NOUN 2gen:FEM 2num:SG 2pos:DET 2typ:DEF ) = 44/44 = 1.0\nPFR0 : ( 0gen:FEM | 0pos:ADJ 1gen:FEM ; p0 = 170/174 = 0.977 )\npc( 0num:SG | 0gen:FEM 0pos:ADJ 1gen:FEM 1num:SG 1pos:NOUN 2gen:FEM 2num:SG 2pos:DET 2typ:DEF ) = 44/44 = 1.0\nPFR1 : ( 0num:SG | 0pos:ADJ 1num:SG 2pos:DET ; p1 = 1.0 )\npc( 0pos:ADJ | 1gen:FEM 1num:SG 1pos:NOUN 2gen:FEM 2num:SG 2pos:DET 2typ:DEF ) = 44/298 = 0.148\nPFR2 : ( 0pos:ADJ | 1gen:FEM 1pos:NOUN 2pos:DET ; p2 = 69/465 = 0.148 )\n∏_{i=0}^{2} pi ≈ 0.145",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "Decomposition amt reconstruction of a contextual tag probability (state transition probability) using probabilislic feature relations(PFH,)",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"text": "( 0pos:ADJ | 1gen:FEM 1pos:NOUN 2pos:DET ; p1 = 0.148 ) and ( 0pos:ADJ | 0num:SG 1num:SG 1syn:NOUN 2syn:DET ; p2 = 0.414 ). Both of them contain subsets of the fv-pairs of the required complete context and could therefore both be applied. In such a case we need to know how to combine p1 and p2 in order to get p (cf. p2 in fig. 2).",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td>Trigram count from a French training corpus of 10,000 words</td></tr></table>",
"html": null,
"text": "",
"num": null,
"type_str": "table"
}
}
}
}