{
"paper_id": "C96-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:52:13.570283Z"
},
"title": "Learning of a Rule-based Spanish Part of Speech Tagger",
"authors": [
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Hausman",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "1996",
"venue": "COLING 1996",
"identifiers": {},
"abstract": "Systems Research and Applications Corporation (SRA), 4300 Fair Lakes Court, Fairfax, VA 22033",
"pdf_parse": {
"paper_id": "C96-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "Systems Research and Applications Corporation (SRA), 4300 Fair Lakes Court, Fairfax, VA 22033",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We have developed a Spanish Part-of-Speech (POS) Tagger which applies and extends Brill's algorithm for unsupervised learning (Brill, 1995) to create a set of rules that reduce the ambiguity of POS tags on words. We have chosen an unsupervised learning algorithm because it does not require a large POS-tagged training corpus. Since there was no POS-tagged Spanish corpus available to us, and since creating a large hand-tagged corpus is both costly and prone to inconsistency, the decision was also a practical one. We have decided to develop a rule-based tagger because such a tagger learns a set of declarative rules, and also because we wanted to compare it with Hidden Markov Model (HMM)-based taggers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "We extended Brill's algorithm in several ways. First, we extended it to handle unknown words in the training and test texts. Second, we parameterized learning and tagging options. Finally, we experimented with a \"hybrid\" solution, where we used a very small number of hand-disambiguated texts during training to overcome a fundamental limitation in the learning algorithm. Our lexicon, derived from the on-line Collins Spanish-English Dictionary, contains about 45,000 entries. We used only the open-class entries from this lexicon, and then augmented it with irregular verb forms and a number of closed-class words. Our morphological analyzer uses a set of rewrite rules to strip off and/or modify word endings to find root forms of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Unknown Word Handling. Since the lexicon and morphological analysis will not cover every single word that can appear in a text, an attempt is made at this stage to classify unknown words. Any word which did not get assigned one or more parts-of-speech in the lookup/morphology phase is examined for certain traits often indicative of particular parts-of-speech. This task is similar to what was done by the guessers for the HMM-based French and German taggers (Chanod and Tapanainen, 1995; Feldweg, 1995). (We have obtained a license to the dictionary.)",
"cite_spans": [
{
"start": 562,
"end": 591,
"text": "(Chanod and Tapanainen, 1995;",
"ref_id": null
},
{
"start": 592,
"end": 606,
"text": "Feldweg, 1995)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2.1.2",
"sec_num": null
},
{
"text": "For example, words ending in the letters \"mente\" are assigned the tag of ADV (adverb). Those words ending in \"ando\" or \"endo\" are assigned the tag V-CONTINUOUS-NIL (continuous form of the verb). Table 1 shows a list of unknown word handling rules. ",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "2.1.2",
"sec_num": null
},
{
"text": "(Table 1: tags assigned by the unknown word handling rules include V-CONTINUOUS-NIL, V-PERFECT-NIL, V-NIL-NIL, V-NIL-NIL-CLITIC, N, ADV, ADJ, and PROPN.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.1.2",
"sec_num": null
},
{
"text": "Performing these simple checks reduces the number of unknowns in our test set of 17,639 words from 737 (4.2%) to 158 (0.9%). The remaining unknowns are assigned a set of ambiguous open-class tags of N, V, ADJ, and ADV so that they can be disambiguated by the Learner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.1.2",
"sec_num": null
},
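The suffix heuristics and the open-class fallback described above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the rule inventory here is only the subset of Table 1 quoted in the text, and the function name is an assumption.

```python
# Sketch of the unknown-word handler: suffix heuristics first,
# then the open-class fallback described in the text.

SUFFIX_RULES = [
    ("mente", {"ADV"}),                 # adverbs ending in -mente
    ("ando", {"V-CONTINUOUS-NIL"}),     # continuous verb forms
    ("endo", {"V-CONTINUOUS-NIL"}),
]

OPEN_CLASS_FALLBACK = {"N", "V", "ADJ", "ADV"}

def guess_tags(word):
    """Return a set of candidate POS tags for a word unknown to the lexicon."""
    for suffix, tags in SUFFIX_RULES:
        if word.lower().endswith(suffix):
            return set(tags)
    # Remaining unknowns stay ambiguous among the open classes,
    # to be disambiguated later by the Learner.
    return set(OPEN_CLASS_FALLBACK)

print(guess_tags("rapidamente"))   # suffix rule fires -> {'ADV'}
print(guess_tags("cantando"))
print(guess_tags("zzz"))           # no rule fires -> open-class fallback
```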
{
"text": "The Learner takes as input ambiguously tagged texts produced by the Initial State Annotator, and tries to learn a set of rules that will reduce the ambiguity of the tags. Output is a file of rules in the following form: The Learner applies Brill's algorithm for unsupervised learning to try to reduce the ambiguity of the tags in the input corpus. The following steps are taken:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learner",
"sec_num": "2.2"
},
{
"text": "context = C: P1 | ... | Pi | ... | Pn -> Pi",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learner",
"sec_num": "2.2"
},
{
"text": "1. The Learner examines each ambiguously tagged word and creates a set of contexts for the word. Two of these contexts will be PREVWORD and NEXTWORD. The remainder consist of PREVTAG and NEXTTAG contexts as required by the tag(s) on the preceding and following words. For example, if the word preceding the ambiguously tagged word is ambiguously tagged with two tags, then the Learner must generate two PREVTAG contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learner",
"sec_num": "2.2"
},
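Context generation in step 1 can be sketched as follows. The context names follow the text; the data representation (a list of word/tag-set pairs) and the function name are assumptions made for illustration.

```python
# Sketch of step 1: building the context set for the ambiguously tagged
# word at position i. One PREVTAG/NEXTTAG context is generated per
# candidate tag on the neighboring word.

def contexts(tagged, i):
    """tagged: list of (word, set_of_tags). Returns the contexts of position i."""
    ctxs = set()
    if i > 0:
        word, tags = tagged[i - 1]
        ctxs.add(("PREVWORD", word))
        for t in tags:                      # one PREVTAG context per tag
            ctxs.add(("PREVTAG", t))
    if i + 1 < len(tagged):
        word, tags = tagged[i + 1]
        ctxs.add(("NEXTWORD", word))
        for t in tags:
            ctxs.add(("NEXTTAG", t))
    return ctxs

sent = [("la", {"DET", "PRO"}), ("casa", {"N", "V"}), ("blanca", {"ADJ"})]
print(sorted(contexts(sent, 1)))
```

Note how the ambiguous determiner "la" contributes two PREVTAG contexts, exactly as the example in the text requires.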
{
"text": "2. An attempt is made to find unambiguously tagged words in the corpus that are tagged with one and only one of the tags on the ambiguously tagged word. For example, if the word in question has both N and V tags, then the Learner would search for words with only an N tag or only a V tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learner",
"sec_num": "2.2"
},
{
"text": "3. If such a word is found, the contexts of that word are examined to determine if there is an overlap between them and the contexts generated for the ambiguously tagged word. One issue for this determination is how much ambiguity should be tolerated in the context of the unambiguously tagged word. For example, if one of the possible contexts is PREVTAG=N and the word preceding the unambiguously tagged word has both N and V tags, should the context apply? To permit various approaches to be tried, we extended the Learner to accept a parameter (i.e., freedom) that determines how much ambiguity will be accepted on the context words for the context to match.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learner",
"sec_num": "2.2"
},
{
"text": "4. If a context matches for this unambiguously tagged word, the count of unambiguously tagged words with the particular part of speech occurring in that context is incremented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learner",
"sec_num": "2.2"
},
{
"text": "5. After the entire corpus is examined, each of these possible reduction rules (of the form \"Change the tag of a word from X to Y in the context C, where Y ∈ X\") is ranked as follows. First, for each tag Z ∈ X, Z ≠ Y, the Learner computes freq(Y)/freq(Z) * incontext(Z, C). The tag Z that gives the highest score from this formula is saved as R. Then the score for a particular transformation is incontext(Y, C) - freq(Y)/freq(R) * incontext(R, C). 6. If the highest-ranked transformation is not positive, the Learner is done. Otherwise, the highest-ranked transformation is appended to the list of transformations learned. The Learner then searches this list for the transformation that will result in the most reduction of ambiguity (which will always be the latest rule learned) and applies it. This process continues until no further reduction of ambiguity is possible. Here, we also extended the Learner to accept a different parameter (i.e., l-tagfreedom) that determines how much ambiguity will be accepted on a word that is used for context during ambiguity reduction, that is, when the Learner has found a rule and is applying it to the training text. Note that specifying too small a value for this parameter can cause the Learner to go into an endless loop, as restricting the valid contexts may have the effect of nullifying the just-learned rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learner",
"sec_num": "2.2"
},
{
"text": "freq(Y)/freq(Z) * incontext(Z, C)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learner",
"sec_num": "2.2"
},
{
"text": "7. The Learner then returns to step 1 to begin the process again.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learner",
"sec_num": "2.2"
},
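The ranking criterion in steps 5 and 6 can be sketched as follows. The counts are toy numbers and the helper names (`freq`, `incontext`, `score_rule`) are assumptions; the scoring expression itself follows the description above.

```python
# Sketch of the rule-ranking criterion: a candidate rule
# "in context C, reduce the ambiguous tag set X to Y" is scored against the
# competing tag R that is most frequent in C relative to overall frequency.

def score_rule(Y, X, C, freq, incontext):
    """freq[Z]: corpus frequency of unambiguously tagged Z.
    incontext[(Z, C)]: how often unambiguous Z occurs in context C."""
    # R is the competing tag Z maximizing freq(Y)/freq(Z) * incontext(Z, C)
    competitors = [Z for Z in X if Z != Y]
    R = max(competitors,
            key=lambda Z: freq[Y] / freq[Z] * incontext.get((Z, C), 0))
    # score = incontext(Y, C) - freq(Y)/freq(R) * incontext(R, C)
    return incontext.get((Y, C), 0) - freq[Y] / freq[R] * incontext.get((R, C), 0)

freq = {"N": 100, "V": 50}
incontext = {("N", ("PREVTAG", "DET")): 30, ("V", ("PREVTAG", "DET")): 2}
s = score_rule("N", {"N", "V"}, ("PREVTAG", "DET"), freq, incontext)
print(s)  # 30 - (100/50) * 2 = 26.0
```

A positive score means unambiguous N appears in this context far more often than its relative frequency would predict, so reducing N/V words to N there is a good rule; a non-positive top score terminates learning (step 6).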
{
"text": "This component reads tagged texts produced by the Initial State Annotator and rules produced by the Learner, and applies the learned rules to the tagged texts to reduce the ambiguity of the tags. We extended the Rule Tagger to have two possible modes of operation (i.e., best-rule-first and learned-sequence modes, controlled by the seq parameter) for using the learned rules to reduce ambiguity:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Tagger",
"sec_num": "2.3"
},
{
"text": "1. The Rule Tagger can use an algorithm similar to that used in step 7 of the Learner. Each possible reduction rule is examined against the text to determine which rule results in the greatest reduction of ambiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Tagger",
"sec_num": "2.3"
},
{
"text": "2. The Rule Tagger can use a sequential application of the learned rules in the order that the rules were learned. After each rule has been applied in sequence, all of the rules preceding it are re-applied to take advantage of ambiguity reductions made by the latest rule applied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Tagger",
"sec_num": "2.3"
},
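The learned-sequence mode described above can be sketched as follows. The toy rule here simply drops one tag from any still-ambiguous tag set containing it; real rules also test a context, so treat the function names and rule representation as assumptions for illustration only.

```python
# Sketch of the learned-sequence mode: rules are applied in the order learned,
# and after each new rule all preceding rules are re-applied to exploit the
# ambiguity reductions the latest rule made.

def drop_tag(tagged, tag):
    """Toy stand-in for applying one reduction rule: remove `tag` from any
    still-ambiguous tag set that contains it (never empty a tag set)."""
    return [(w, ts - {tag} if tag in ts and len(ts) > 1 else ts)
            for w, ts in tagged]

def tag_learned_sequence(tagged, rules):
    for i, rule in enumerate(rules):
        tagged = drop_tag(tagged, rule)
        for earlier in rules[:i]:          # re-apply all preceding rules
            tagged = drop_tag(tagged, earlier)
    return tagged

words = [("la", {"DET", "PRO"}), ("van", {"V", "PROPN"})]
print(tag_learned_sequence(words, ["PRO", "PROPN"]))
```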
{
"text": "The Rule Tagger allows one to specify, as in the Learner, how much ambiguity will be tolerated for a context to match. For example, one can be very restrictive and require that a tag context (e.g., PREVTAG=N) match only an unambiguously tagged word (in this case, a word with only an N tag). This parameter (i.e., r-tagfreedom)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Tagger",
"sec_num": "2.3"
},
{
"text": "specifies the maximum ambiguity allowed on a context word for a context tag to match: 1 requires that the context word be unambiguously tagged, 2 requires that there be no more than two tags on the word, and so on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Tagger",
"sec_num": "2.3"
},
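The r-tagfreedom check just described is small enough to state directly; this sketch is illustrative and the function name is an assumption.

```python
# Sketch of the r-tagfreedom check: a tag context such as PREVTAG=N is allowed
# to match a neighboring word only if that word carries at most `freedom`
# candidate tags (freedom=1 means the neighbor must be unambiguously tagged).

def tag_context_matches(tag, neighbor_tags, freedom):
    return tag in neighbor_tags and len(neighbor_tags) <= freedom

print(tag_context_matches("N", {"N"}, 1))        # True: unambiguous neighbor
print(tag_context_matches("N", {"N", "V"}, 1))   # False: too ambiguous
print(tag_context_matches("N", {"N", "V"}, 2))   # True under freedom=2
```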
{
"text": "For training and testing of the tagger, we have randomly picked articles from a large (274MB) \"El Norte\" Mexican newspaper corpus, and separated them into the training and test sets. The test set (17,639 words) was tagged manually for comparison against the system-tagged texts. For training, we partitioned the development set into several different-sized sets in order to see the effects of training corpus sizes. The breakdown can be found in Table 2. If one randomly picks one of the possible tags for each word in the test set, the accuracy is 78.6% (78.0% with the simple verb tag set). The average POS ambiguity per word is 1.52 (1.49) including punctuation tags and 1.58 (1.56) excluding punctuation tags. For comparison, the accuracy of Brill's unsupervised English tagger was 95.1% using 120,000-word Penn Treebank texts. His initial state tagging accuracy was 90.7%, which is considerably higher than in our Spanish case (78.6%).",
"cite_spans": [],
"ref_spans": [
{
"start": 461,
"end": 468,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": null
},
{
"text": "Our first set of experiments tests the effect of the POS tag complexity. We used both the simple verb tag set (5 tags) and the complex verb tag set (42 tags), which is shown in Table 3, where * can be either 1SG, 2SG, 3SG, 1PL, 2PL, or 3PL. In the case of the simple verb tag set, tense, person and number information is discarded, leaving only a \"V\" tag and the lower four tags in the table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Tag Set",
"sec_num": "3.1"
},
{
"text": "The scores with the simple verb tag set for different sizes of training sets are found in Table 4, and those with the complex verb tag set in Table 5. For these two experiments, the Learner was set to have a tight restriction on using context for learning (i.e., the freedom parameter was set to 1) and a loose restriction on context for applying the learned rules (i.e., l-tagfreedom 10). The Rule Tagger was given a moderately-tight restriction on using context for reduction rule application (i.e., r-tagfreedom 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Tag Set",
"sec_num": "3.1"
},
{
"text": "In general, the scores are slightly higher using the simple verb tag set over the complex verb tag set. This rule was learned late in the learning process when most P/SUBCONJ pairs had already been reduced. However, as one can see from the context of the rule, it will apply in a large number of cases in a text. The Rule Tagger notes this and applies the rule early, thus incorrectly changing many P/SUBCONJ pairs to SUBCONJ and reducing the accuracy of the tagging. Since this phenomenon never occurred in any of the other learning runs, one can see that the learning process can be heavily influenced by the choice of input texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Tag Set",
"sec_num": "3.1"
},
{
"text": "(Table 3: complex verb tags) V-CONDITIONAL-*, V-FUTURE-*, V-IMPERFECT-*, V-IMPERFECT-SUBJUNCTIVE-RA-*, V-IMPERFECT-SUBJUNCTIVE-SE-*, V-PRESENT-*, V-PRESENT-SUBJUNCTIVE-*, V-PRETERIT-*, V-NIL-NIL, V-CONTINUOUS-NIL, V-PERFECT-NIL, V-NIL-NIL-CLITIC",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Tag Set",
"sec_num": "3.1"
},
{
"text": "The next tests performed involved using the rules generated above and changing parameters to the Rule Tagger to see how the scores would be influenced. In the following test, we used the simple verb tag set rules but varied the r-tagfreedom parameter and the seq parameter. The results can be found in Table 6. The simple verb tag set scored 91.8% vs. 90.3% for the complex set on the \"Medium\" corpus. This behavior is most likely due to the fact that some verb tense/person/number combinations cannot easily be distinguished from context, so the Learner was unable to find a rule that would disambiguate them. As can be seen from the tables, performance increased as the size of the learning set increased up to the \"Medium\" set, where the score levelled off. With very small learning sets, the system was unable to find sufficient examples of phenomena to produce reduction rules with good coverage.",
"cite_spans": [],
"ref_spans": [
{
"start": 303,
"end": 310,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Effect of Rule Application Parameters",
"sec_num": "3.2"
},
{
"text": "One surprising data point in the simple verb tag set experiments was the \"Full\" score, which dropped almost 9% from the \"Medium\" score. After analyzing the results more closely, it was found that the Learner had learned a very specific rule regarding the reduction of preposition/subordinate-conjunction combinations late in the learning process. Although the variations are slight, the best value for the r-tagfreedom parameter seems to be at an ambiguity level of 2. It seems that the strategy of reducing the ambiguity as quickly as possible (best-rule-first) is better than following the ordering of the rules by the Learner. This may well be due to the fact that the ordering of the rules as produced by the Learner is dependent on the training texts. Since the test set was a different set of texts, the ordering of the rules was not as applicable to them as to the training texts, and so the tagging performance suffered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Rule Application Parameters",
"sec_num": "3.2"
},
{
"text": "Effect of Hand-tagged Texts. After examining the results from the above experiments, we realized that some of the closed-class words in Spanish are almost always ambiguous (e.g., prepositions are usually ambiguous between PREP and SUBCONJ, and determiners between DET and PRO). This means that the Learner will never learn a rule to disambiguate these closed-class cases, because there will rarely be unambiguous contexts in the training texts tagged by the Initial State Annotator. The breakdown is in Table 7. The Learner was given a loose restriction on context for applying the learned rules (l-tagfreedom 10), and the Rule Tagger was given a moderately-tight restriction on using context for reduction. We tested whether adding hand-tagged texts to the \"Full\" ambiguously tagged set would improve its rather low score (cf. Table 4). We performed an experiment using simple verb tags, the \"Full\" ambiguously tagged texts, and the \"Full\" hand-tagged texts. The results were 422 rules learned with a score of 92.1%, which tied with the \"Small\" ambiguously tagged set for achieving the highest accuracy of all of the learning/tagging runs, a full 13.5% higher than using no learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.3",
"sec_num": null
},
{
"text": "Problems and Possible Improvements. Although our Spanish POS tagger performed reasonably well, achieving an improvement of 13.5% in accuracy over randomly picking tags, there were several problems that prevented the system from reaching an even higher score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "As discussed in Section 3.3, ambiguous closed-class words (e.g., prepositions, determiners, etc.) cannot be reduced when there are no unambiguous examples of them in the training texts. This is prevalent in Spanish, where most prepositions can also be subordinate conjunctions, determiners can be pronouns, etc. Separately, when the lexicon does not list all the possible tags for a word, the tagger is very likely to make a mistake. This is because the learner is trained to reduce the ambiguity of the possible tags of a word (say N, V, ADJ tags), but if the lexicon lists only a subset of the possible tags (say N and V tags), the system will never learn to assign an ADJ tag even when the word is used as an adjective. This type of problem was observed frequently when words are ambiguous between proper nouns and some other parts-of-speech, such as \"Flores (ADJ/PROPN),\" \"Lozano (ADJ/PROPN),\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Problem",
"sec_num": "4.1"
},
{
"text": "\"van (V/PROPN),\" \"Serra (V/PROPN),\" etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Problem",
"sec_num": "4.1"
},
{
"text": "because not all the proper nouns are in the lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Problem",
"sec_num": "4.1"
},
{
"text": "The problems described above did not occur in Brill's experiments because he derived the lexicon from a POS-tagged corpus and used the untagged version of the same corpus for training and testing. Thus, he used an \"optimal\" lexicon which contains all the words with only the parts-of-speech which appeared in the corpus. In addition, in such a corpus, rarely used POS tags of a word are less likely to occur, and words are less likely to be ambiguous. Thus, in a sense, his \"unsupervised learning\" experiments did take advantage of a large POS-tagged corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Problem",
"sec_num": "4.1"
},
{
"text": "It is very difficult to compare performance between taggers when accuracy depends on the quality of corpora and lexicons, and perhaps on characteristics of languages. But in this section, we compare our tagger with Hidden Markov Model-based taggers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "5"
},
{
"text": "A more widely used algorithm for unsupervised learning of a POS tagger is the Hidden Markov Model (HMM). Cutting et al. (Cutting et al., 1992) and Merialdo (Merialdo, 1994) used HMMs to learn English POS taggers, while Chanod and Tapanainen (Chanod and Tapanainen, 1995), Feldweg (Feldweg, 1995), and León and Serrano (León and Serrano, 1995) ported the Xerox tagger (Cutting et al., 1992) to French, German, and Spanish respectively. One of the drawbacks of an HMM-based approach is that laborious manual tuning of symbol and transition biases is necessary to achieve high accuracy. Without tuned biases, the German Xerox tagger achieved 85.89% while the French Xerox tagger achieved 87% accuracy. After one man-month of tuning biases, the accuracy of the French tagger increased to 96.8%. One could derive such biases from a corpus, as discussed in (Merialdo, 1994), but that unfortunately requires a tagged corpus. The best accuracy of the Spanish Xerox tagger was 91.51% for the reduced tag set (174 tags), with a base accuracy (i.e., no training) of 88.98%, while the best accuracy of our tagger is currently 92.1% for the simple tag set (39 tags) with a base accuracy of 78.6%. The lower base accuracy in our experiment is probably due to the large number of entries in the Collins dictionary. (Footnote on \"van\": it can be a part of a last name, as in \"van Mahler\", but is also an inflected form of \"ir\".)",
"cite_spans": [
{
"start": 118,
"end": 141,
"text": "(Cutting et al., 1992)",
"ref_id": null
},
{
"start": 155,
"end": 171,
"text": "(Merialdo, 1994)",
"ref_id": "BIBREF10"
},
{
"start": 241,
"end": 270,
"text": "(Chanod and Tapanainen, 1995)",
"ref_id": null
},
{
"start": 273,
"end": 296,
"text": "Feldweg (Feldweg, 1995)",
"ref_id": "BIBREF8"
},
{
"start": 320,
"end": 345,
"text": "(León and Serrano, 1995)",
"ref_id": null
},
{
"start": 371,
"end": 393,
"text": "(Cutting et al., 1992)",
"ref_id": "BIBREF6"
},
{
"start": 862,
"end": 872,
"text": "(Merialdo,",
"ref_id": null
},
{
"start": 873,
"end": 878,
"text": "1994)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "5"
},
{
"text": "Our Spanish Part of Speech Tagger is a successful implementation and extension of Brill's unsupervised learning algorithm that reduces the ambiguity of part-of-speech tags on words in Spanish texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "6"
},
{
"text": "The system requires few, if any, hand-tagged texts to bootstrap itself. Rather, it merely requires a Spanish lexicon and morphological analyzer that can tag words with all their possible parts-of-speech. Given that the system performs at approximately 92% accuracy even with the aforementioned problems and with the inclusion of unknown words, we would expect that this system could achieve better results, approaching those of similar English-language POS taggers, when these problems are rectified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "can also be subordinate conjunctions, determiners can be pronouns, etc. A few hand-tagged texts are required to learn good rules for reducing the ambiguity on these words. It is possible, however, that such texts can be disambiguated only for their always-ambiguous closed-class words, but not unambiguous closed-class words or open-class words. Such an experiment, similar to the selective sampling discussed in Dagan and Engelson (Dagan and Engelson, 1995), would be useful in the future because, if it is true, it will reduce the cost of manual tagging considerably.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Lexicon Problem",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lexicon Problem.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Problems that became apparent as we ran more tests were the incompleteness of and mistakes in the lexicon. While the lexicon, derived from the Collins Spanish-English dictionary, was quite rich in words, its tag set did not always match the tag definitions we employed. For example, our tag set distinguishes proper nouns (PROPN) and nouns (N), whereas the Collins dictionary marked both as nouns (N). We have added our existing proper name lists to the lexicon to partially solve this problem, but the lists are currently limited to location",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "names and peoples' first names. We also found several mistakes in the Collins definitions (e.g., several adverbs ending in \"mente\" were classified as adjectives).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised learning of disambiguation rules for part of speech tagging",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 3rd Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Although we fixed these mistakes as we noticed them, it is difficult to know how many such errors still remain in the lexicon. It turned out that the incompleteness of the lexicon was another fundamental problem for Brill's unsupervised learning algorithm. That is, when ... References: Eric Brill. 1995. Unsupervised learning of disambiguation rules for part of speech tagging. In Proceedings of the 3rd Workshop on Very Large Corpora.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Tagging French - comparing a statistical and a constraint-based method",
"authors": [
{
"first": "Jean-Pierre",
"middle": [],
"last": "Chanod",
"suffix": ""
},
{
"first": "Pasi",
"middle": [],
"last": "Tapanainen",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of EACL-95",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean-Pierre Chanod and Pasi Tapanainen. 1995. Tagging French - comparing a statistical and a constraint-based method. In Proceedings of EACL-95.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Practical Part-of-Speech Tagger",
"authors": [
{
"first": "D",
"middle": [],
"last": "Cutting",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kupiec",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Sibun",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Third Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Cutting, J. Kupiec, J. Pedersen, and P. Sibun. 1992. A Practical Part-of-Speech Tagger. In Proceedings of the Third Conference on Applied Natural Language Processing.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Selective Sampling in Natural Language Learning",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Sean",
"middle": [
"P"
],
"last": "Engelson",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the IJCAI Workshop on New Approaches to Learning for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan and Sean P. Engelson. 1995. Selective Sampling in Natural Language Learning. In Proceedings of the IJCAI Workshop on New Approaches to Learning for Natural Language Processing.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Implementation and Evaluation of a. German ll M M for POS Disambigualion",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Feldweg",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the EACL SIGDAT Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmut Feldweg. 1995. Implementation and Evaluation of a German HMM for POS Disambiguation. In Proceedings of the EACL SIGDAT Workshop.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Development of a Spanish version of the Xerox tagger",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Sánchez León",
"suffix": ""
},
{
"first": "Amalio",
"middle": [
"F"
],
"last": "Nieto Serrano",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the XI Congreso de la Sociedad Española para el Procesamiento del Lenguaje Natural (SEPLN '95)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Sánchez León and Amalio F. Nieto Serrano. 1995. Development of a Spanish version of the Xerox tagger. In Proceedings of the XI Congreso de la Sociedad Española para el Procesamiento del Lenguaje Natural (SEPLN '95).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Tagging English Text with a Probabilistic Model",
"authors": [
{
"first": "Bernard",
"middle": [],
"last": "Merialdo",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernard Merialdo. 1994. Tagging English Text with a Probabilistic Model. Computational Linguistics, 20(2).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "...of hand-disambiguated texts during training to overcome a fundamental limitation in the learning algorithm.",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "The Spanish POS tagger consists of three components: the Initial State Annotator, the Learner, and the Rule Tagger, each of which is described below. The Initial State Annotator component is used to assign all possible POS tags to a given Spanish word. It consists of lexicon lookup, morphological analysis, and unknown word handling. The Spanish POS tag set used in this work consists of the following tags: ADJ, ADV, BE (form of ser or estar), CLOCK-TIME, COLON, COMMA, CONJ, DATE, DET, HAVE (form of haber), HYPHEN, LETTER, LPAREN, MODAL, MULTIPLIER, N, NUMBER, P, PERIOD, PREFIX, PRO, PROPN, QUES-MARK, QUOTE, ROMAN, RPAREN, SEMI-COLON, SLASH, SUBCONJ, SUFFIX, THERE (haber used in \"there\" constructions), WHDET, WHPRO, WHADV, and V (see Table 3). 2.1.1 Lexicon Lookup and Morphological Analysis. Unlike Brill's English tagger experiment described in (Brill, 1995), no large POS-tagged Spanish corpus was available to us from which a large lexicon can be derived. As a result, we decided to parse the on-line Collins Spanish-English Dictionary and derived a large lexicon from it.",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "Here are some examples taken from the actual learned rules: * NEXTWORD = DE : P|N -> N * PREVWORD = EN : DET|ADV -> DET * PREVTAG = DET : V|N -> N * NEXTTAG = SUBCONJ : BE|V -> V. (PREVWORD = previous word, PREVTAG = previous tag.)",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "where freq(Y) = number of occurrences of words unambiguously tagged with Y, freq(Z) = number of occurrences of words unambiguously tagged with Z, incontext(Z, C) = number of times a word unambiguously tagged with Z occurs in context C.",
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"uris": null,
"text": "That is, unlike open-class words, we will not find new unambiguous closed-class words in texts precisely because there is only a closed set of them. Thus, we decided to introduce a small number of hand-tagged texts into the training set given to the Learner. Since the hand-tagged texts have \"correct\" examples of various phenomena, the Learner should be able to find good examples in them to learn from. For our tests, we defined four sets of hand-tagged texts that we added to the \"Small\" (3066 words) set of ambiguously tagged texts.",
"type_str": "figure"
},
"FIGREF6": {
"num": null,
"uris": null,
"text": "...rule application (freedom 2). The best-rule-first mode of the Rule Tagger was used. The results, as shown in Table 8, are slightly better than when using only ambiguously tagged texts. It is interesting to note that the higher accuracy was achieved with fewer rules. In fact, all experiments resulted in learning a little over 200 rules. Table 8: Ambiguous/Unambiguous Texts, Simple Verbs. In addition to the experiments above, we wanted to know if the introduction of hand-",
"type_str": "figure"
},
"TABREF0": {
"num": null,
"type_str": "table",
"html": null,
"text": "Unknown Word Handling Rules",
"content": "<table><tr><td colspan=\"2\">: Unknown Word Handling Rules</td></tr><tr><td>Heuristics</td><td>POS tag</td></tr><tr><td>num &gt; 1600 &amp; &lt; 2100</td><td>DATE</td></tr><tr><td>roman numeral 1-9</td><td>ROMAN</td></tr><tr><td>-and,-endo</td><td/></tr><tr><td>-ido,-ado,-ida,-ada</td><td/></tr><tr><td>-er,-ir,-ar</td><td/></tr><tr><td>-erse,-irse,-arse</td><td/></tr><tr><td>-ción,-idad,-izaje</td><td/></tr><tr><td>-mente</td><td/></tr><tr><td>-able</td><td/></tr><tr><td>capitalized</td><td/></tr></table>"
},
"TABREF2": {
"num": null,
"type_str": "table",
"html": null,
"text": "Complex Verb Tag Set",
"content": "<table/>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"html": null,
"text": "Ambiguously tagged texts, Simple Verbs",
"content": "<table><tr><td colspan=\"3\">Ambiguously tagged texts, Simple Verbs</td></tr><tr><td>Set</td><td># of rules learned</td><td>Score</td></tr><tr><td>Tiny</td><td>131</td><td>82.5%</td></tr><tr><td>Small</td><td>211</td><td>91.5%</td></tr><tr><td>Medium</td><td>287</td><td>91.8%</td></tr><tr><td>Full</td><td>434</td><td>83.0%</td></tr><tr><td>(none)</td><td>0</td><td>78.6%</td></tr></table>"
},
"TABREF5": {
"num": null,
"type_str": "table",
"html": null,
"text": "Ambiguously tagged texts, Simple Verbs",
"content": "<table><tr><td colspan=\"4\">: Ambiguously tagged texts, Simple Verbs</td></tr><tr><td>Set</td><td>R-Tag-</td><td>Score</td><td>Score</td></tr><tr><td/><td>freedom</td><td>(best-rule-</td><td>(learned-</td></tr><tr><td/><td/><td>first)</td><td>sequence)</td></tr><tr><td>Tiny</td><td>1</td><td>82.7%</td><td>80.2%</td></tr><tr><td/><td>2</td><td>82.5%</td><td>80.6%</td></tr><tr><td/><td>3</td><td>82.1%</td><td>80.5%</td></tr><tr><td/><td>4</td><td>81.9%</td><td>80.5%</td></tr><tr><td>Small</td><td>1</td><td>90.1%</td><td>89.8%</td></tr><tr><td/><td>2</td><td>91.5%</td><td>89.9%</td></tr><tr><td/><td>3</td><td>91.5%</td><td>89.9%</td></tr><tr><td/><td>4</td><td>91.5%</td><td>89.9%</td></tr><tr><td>Medium</td><td>1</td><td>90.5%</td><td>90.6%</td></tr><tr><td/><td>2</td><td>91.8%</td><td>90.5%</td></tr><tr><td/><td>3</td><td>91.8%</td><td>90.5%</td></tr><tr><td/><td>4</td><td>91.8%</td><td>90.5%</td></tr><tr><td>Full</td><td>1</td><td>82.4%</td><td>79.8%</td></tr><tr><td/><td>2</td><td>83.0%</td><td>80.0%</td></tr><tr><td/><td>3</td><td>81.7%</td><td>80.0%</td></tr><tr><td/><td>4</td><td>81.5%</td><td>80.0%</td></tr></table>"
}
}
}
}