| { |
| "paper_id": "W96-0207", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T04:59:17.900382Z" |
| }, |
| "title": "Combining Hand-crafted Rules and Unsupervised Learning in Constraint-based Morphological Disambiguation", |
| "authors": [ |
| { |
| "first": "Kemal", |
| "middle": [], |
| "last": "Oflazer", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Bilkent University", |
| "location": { |
| "postCode": "TR-06533", |
| "settlement": "Bilkent, Ankara", |
| "country": "TURKEY" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Gökhan", |
| "middle": [], |
| "last": "Tür", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Bilkent University", |
| "location": { |
| "postCode": "TR-06533", |
| "settlement": "Bilkent, Ankara", |
| "country": "TURKEY" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "", |
| "pdf_parse": { |
| "paper_id": "W96-0207", |
| "_pdf_hash": "", |
| "abstract": [], |
| "body_text": [ |
| { |
| "text": "This paper presents a constraint-based morphological disambiguation approach that is applicable to languages with complex morphology, specifically agglutinative languages with productive inflectional and derivational morphological phenomena. In certain respects, our approach has been motivated by Brill's recent work (Brill, 1995b), but with the observation that his transformational approach is not directly applicable to languages like Turkish. Our system combines corpus-independent hand-crafted constraint rules, constraint rules that are learned via unsupervised learning from a training corpus, and additional statistical information from the corpus to be morphologically disambiguated. The hand-crafted rules are linguistically motivated and tuned to improve precision without sacrificing recall. The unsupervised learning process produces two sets of rules: (i) choose rules, which choose morphological parses of a lexical item satisfying a constraint, effectively discarding other parses, and (ii) delete rules, which delete parses satisfying a constraint. Our approach also uses a novel approach to unknown word processing by employing a secondary morphological processor which recovers any relevant inflectional and derivational information from a lexical item whose root is unknown. With this approach, well below 1% of the tokens remain unknown in the texts we have experimented with. Our results indicate that by combining these hand-crafted, statistical and learned information sources, we can attain a recall of 96 to 97% with a corresponding precision of 93 to 94%, and an ambiguity of 1.02 to 1.03 parses per token.",
| "cite_spans": [ |
| { |
| "start": 314, |
| "end": 328, |
| "text": "(Brill, 1995b)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Automatic morphological disambiguation is a very crucial component in higher level analysis of natural language text corpora. Morphological disambiguation facilitates parsing, essentially by performing a certain amount of ambiguity resolution using relatively cheaper methods (e.g., Güngördü and Oflazer (1995)). There have been a large number of studies in tagging and morphological disambiguation using various techniques. Part-of-speech tagging systems have used either a statistical approach, where a large corpus has been used to train a probabilistic model which then has been used to tag new text, assigning the most likely tag for a given word in a given context (e.g., Church (1988), Cutting et al. (1992), DeRose (1988)). Another approach is the rule-based or constraint-based approach, recently most prominently exemplified by the Constraint Grammar work (Karlsson et al., 1995; Voutilainen, 1995b; Voutilainen et al., 1992; Voutilainen and Tapanainen, 1993), where a large number of hand-crafted linguistic constraints are used to eliminate impossible tags or morphological parses for a given word in a given context. Brill (1992; has presented a transformation-based learning approach, which induces rules from tagged corpora. Recently he has extended this work so that learning can proceed in an unsupervised manner using an untagged corpus (Brill, 1995b). Levinger et al. (1995) have recently reported on an approach that learns morpholexical probabilities from an untagged corpus and have used the resulting information in morphological disambiguation in Hebrew.",
| "cite_spans": [ |
| { |
| "start": 283, |
| "end": 312, |
| "text": "Güngördü and Oflazer (1995)", |
| "ref_id": null |
| }, |
| { |
| "start": 680, |
| "end": 693, |
| "text": "Church (1988)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 696, |
| "end": 717, |
| "text": "Cutting et al. (1992)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 720, |
| "end": 733, |
| "text": "DeRose (1988)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 871, |
| "end": 894, |
| "text": "(Karlsson et al., 1995;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 895, |
| "end": 914, |
| "text": "Voutilainen, 1995b;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 915, |
| "end": 940, |
| "text": "Voutilainen et al., 1992;", |
| "ref_id": null |
| }, |
| { |
| "start": 941, |
| "end": 974, |
| "text": "Voutilainen and Tapanainen, 1993)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 1136, |
| "end": 1148, |
| "text": "Brill (1992;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1361, |
| "end": 1375, |
| "text": "(Brill, 1995b)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1378, |
| "end": 1400, |
| "text": "Levinger et al. (1995)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In contrast to languages like English, for which there is a very small number of possible word forms with a given root word, and a small number of tags associated with a given lexical form, languages like Turkish or Finnish with very productive agglutinative morphology, where it is possible to produce thousands of forms (or even millions (Hankamer, 1989)) for a given root word, pose a challenging problem for morphological disambiguation. In English, for example, a word such as make or set can be a verb or a noun. In Turkish, even though there are ambiguities of this sort, the agglutinative nature of the language usually helps resolution of such ambiguities due to restrictions on morphotactics. On the other hand, this very nature introduces another kind of ambiguity, where a lexical form can be morphologically interpreted in many ways, some with totally unrelated roots and morphological features, as will be exemplified in the next section.",
| "cite_spans": [ |
| { |
| "start": 339, |
| "end": 355, |
| "text": "(Hankamer, 1989)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Our previous approach to tagging and morphological disambiguation for Turkish text had employed a constraint-based approach (Oflazer and Kuruöz, 1994) along the general lines of similar previous work for English (Karlsson et al., 1995; Voutilainen et al., 1992; Voutilainen and Tapanainen, 1993). Although the results obtained there were reasonable, the fact that all constraint rules were hand-crafted posed a rather serious impediment to the generality and improvement of the system.",
| "cite_spans": [ |
| { |
| "start": 124, |
| "end": 150, |
| "text": "(Oflazer and Kuruöz, 1994)", |
| "ref_id": null |
| }, |
| { |
| "start": 212, |
| "end": 235, |
| "text": "(Karlsson et al., 1995;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 236, |
| "end": 261, |
| "text": "Voutilainen et al., 1992;", |
| "ref_id": null |
| }, |
| { |
| "start": 262, |
| "end": 295, |
| "text": "Voutilainen and Tapanainen, 1993)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper we present a constraint-based morphological disambiguation approach that uses an unsupervised learning component to discover some of the constraints it uses in conjunction with hand-crafted rules. It is specifically applicable to languages with productive inflectional and derivational morphological processes, such as Turkish, where morphological ambiguity has a rather different nature than that found in languages like English. Our approach starts with a set of corpus-independent hand-crafted rules that reduce morphological ambiguity (hence improve precision) without sacrificing recall. It then uses an untagged training corpus in which all lexical items have been annotated with all possible morphological analyses, incrementally proposing and evaluating additional (possibly corpus-dependent) constraints for disambiguation of morphological parses using the constraints imposed by unambiguous contexts. These rules choose or delete parses with specified features. In certain respects, our approach has been motivated by Brill's recent work (Brill, 1995b), but with the observation that his transformational approach is not directly applicable to languages like Turkish, where tags associated with forms are not predictable in advance.",
| "cite_spans": [ |
| { |
| "start": 1059, |
| "end": 1073, |
| "text": "(Brill, 1995b)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In the following sections, we present an overview of the morphological disambiguation problem, highlighted with examples from Turkish.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We then present the details of our approach and results. We finally conclude after a discussion and evaluation of our results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In almost all languages, words are usually ambiguous in their parts-of-speech or other lexical features, and may represent lexical items of different syntactic categories, or morphological structures depending on the syntactic and semantic context. Part-of-speech (POS) tagging involves assigning every word its proper part-of-speech based upon the context the word appears in. In English, for example a word such as set can be a verb in certain contexts (e.g., He set the table for dinner) and a noun in some others (e.g., We are now facing a whole set of problems).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tagging and Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "In Turkish, there are ambiguities of the sort above. However, the agglutinative nature of the language usually helps resolution of such ambiguities due to the restrictions on morphotactics. On the other hand, this very nature introduces another kind of ambiguity, where a whole lexical form can be morphologically interpreted in many ways not predictable in advance. For instance, our full-scale morphological analyzer for Turkish returns the following set of parses for the word oysa:1,2 1Output of the morphological analyzer is edited for clarity, and English glosses have been given.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tagging and Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "2Glosses are given as linear feature-value sequences corresponding to the morphemes (which are not shown). The feature names are as follows: CAT-major category, TYPE-minor category, ROOT-main root form, AGR-number and person agreement, POSS-possessive agreement, CASE-surface case, CONV-conversion to the category following with a certain suffix indicated by the argument after that, TAM1-tense, aspect, mood marker 1, SENSE-verbal polarity, DES-desire mood, IMP-imperative mood, OPT-optative mood, COND-conditional. Here, the original root is verbal but the final part-of-speech is adjectival. In general, the ambiguities of the forms that come before such a form in text can be resolved with respect to its original (or intermediate) parts-of-speech (and inflectional features), while the ambiguities of the forms that follow can be resolved based on its final part-of-speech.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tagging and Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "The main intent of our system is to achieve a morphological ambiguity reduction in the text by choosing, for a given ambiguous token, a subset of its 3With a slightly different but nevertheless common glossing convention.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tagging and Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "4Upper case in morphological output indicates one of the non-ASCII special Turkish characters: e.g., G denotes ğ, U denotes ü, etc. parses which are not disallowed by the syntactic context it appears in. It is certainly possible that a given token may have multiple correct parses, usually with the same inflectional features or with inflectional features not ruled out by the syntactic context. These can usually only be disambiguated on semantic or discourse constraint grounds. 5",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tagging and Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "We consider a token fully disambiguated if it has only one morphological parse remaining after automatic disambiguation. We consider a token correctly disambiguated if one of the parses remaining for that token is the correct intended parse. 6 We evaluate the resulting disambiguated text by a number of metrics defined as follows (Voutilainen, 1995a):",
| "cite_spans": [ |
| { |
| "start": 336, |
| "end": 356, |
| "text": "(Voutilainen, 1995a)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tagging and Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "Ambiguity = #Parses / #Tokens; Recall = #Tokens Correctly Disambiguated / #Tokens; Precision = #Tokens Correctly Disambiguated / #Parses",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tagging and Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "In the ideal case, where each token is uniquely and correctly disambiguated with the correct parse, both recall and precision will be 1.0. On the other hand, for a text where each token is annotated with all possible parses,7 the recall will be 1.0 but the precision will be low. The goal is to have both recall and precision as high as possible.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tagging and Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tagging and Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "This section outlines our approach to constraint-based morphological disambiguation incorporating an unsupervised learning component. Our system, with the structure presented in Figure 1, has three main components:",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 173, |
| "end": 181, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Constraint-based Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "1. the preprocessor, 2. the learning module, and 3. the morphological disambiguation module.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint-based Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "Preprocessing is common to both the learning and the morphological disambiguation modules. The module takes raw Turkish text as input and preprocesses it in a manner to be described shortly.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint-based Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "If the text is to be used for training, the learning module then 1. applies an initial set of linguistically motivated hand-crafted constraint rules to choose and/or delete certain parses, and 5For instance the third and fourth parses for oysa above.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint-based Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "6It is certainly possible that a parse that is deleted may also be a valid parse in that context. 7Assuming no unknown words.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint-based Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "2. uses an unsupervised learning procedure to induce some additional (and possibly corpus-dependent) rules to choose and delete some parses. Morphological disambiguation of previously unseen text proceeds as follows:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint-based Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "1. The hand-crafted rules are applied first.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint-based Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "2. Certain parses are deleted using context statistics on the corpus to be tagged.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint-based Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "3. Rules learned to choose and delete parses are then applied.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint-based Morphological Disambiguation", |
| "sec_num": null |
| }, |
| { |
| "text": "The preprocessing module takes as input a Turkish text, segments it into sentences using various heuristics about punctuation, tokenizes and runs it through a wide-coverage high-performance morphological analyzer developed using two-level morphology tools by Xerox (Karttunen, 1993) . This module also performs a number of additional functions:", |
| "cite_spans": [ |
| { |
| "start": 265, |
| "end": 282, |
| "text": "(Karttunen, 1993)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Preprocessor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 it groups lexicalized collocations such as idiomatic forms, semantically coalesced forms such as proper noun groups, certain numeric forms, etc.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Preprocessor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 it groups any compound verb formations which are formed by a lexically adjacent, direct or oblique object, and a verb, which for the purposes of syntactic analysis may be considered a single lexical item: e.g., saygı durmak (to pay respect), kafayı yemek (literally to eat the head - to get mentally deranged), etc.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Preprocessor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 it groups non-lexicalized collocations: Turkish abounds with various non-lexicalized collocations where the sentential role of the collocation has (almost) nothing to do with the parts-of-speech of the individual forms involved. Almost all of these collocations involve duplications, and have forms like w + x w + y where w is the duplicated string comprising the root and a certain sequence of suffixes, and x and y are possibly different (or empty) sequences of other suffixes.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Preprocessor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The following is a list of multi-word constructs for Turkish that we handle in our preprocessor. This list is not meant to be comprehensive, and new construct specifications can easily be added. It is conceivable that such functionality can be used in almost any language. (See Oflazer and Kuruöz (1994) and Kuruöz (1994) for details of all other forms for Turkish.) The set of features selected for each part-of-speech category is determined by a template and hence is controllable, permitting experimentation with differing levels of information. The information selected for stems is determined by the category of the stem itself, recursively.",
| "cite_spans": [ |
| { |
| "start": 280, |
| "end": 305, |
| "text": "Oflazer and Kuruöz (1994)", |
| "ref_id": null |
| }, |
| { |
| "start": 310, |
| "end": 323, |
| "text": "Kuruöz (1994)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Preprocessor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Under certain circumstances where a token has two or more parses that agree in the selected features, those parses will be represented by a single projected parse, hence the number of parses in the (projected) training corpus may be smaller than the number of parses in the original corpus. For example, the feature structure above is projected into a feature structure such as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Preprocessor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "[CAT ADJ, SUFFIX REL, STEM: [CAT NOUN, AGR 3SG, POSS 1SG, CASE LOC, STEM: [CAT VERB, SUFFIX DIK]]]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Preprocessor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Unknown Words Although the coverage of our morphological analyzer for Turkish (Oflazer, 1993), with about 30,000 root words and about 35,000 proper names, is very satisfactory, it is inevitable that there will be forms in the corpora being processed that are not recognized by the morphological analyzer. These are almost always foreign proper names, words adapted into the language and not in the lexicon, or very obscure technical words. These are nevertheless inflected (using Turkish word formation paradigms) with the inflectional features demanded by the syntactic context, and sometimes even go through derivational processes. For improved disambiguation, one has to at least recover any morphological features even if the root word is unknown. To deal with this, we have made the assumption that all unknown words have nominal roots, and built a second morphological analyzer whose (nominal) root lexicon recognizes S+, where S is the Turkish surface alphabet (in the two-level morphology sense), but then tries to interpret an arbitrary postfix of the unknown word as a sequence of Turkish suffixes subject to all morphographemic constraints. For instance, when a form such as talkshowumun is entered, this second analyzer hypothesizes the following analyses, which are then processed just like any other during disambiguation.8 This, however, is not a sufficient solution for some very obscure situations where the foreign word is written using its, say, English orthography, while suffixation goes on according to its English pronunciation, which may make some constraints like vowel 8Incidentally, the correct analysis is the 6th, meaning of my talk show. The 5th one has the same morphological features except for the root. harmony inapplicable on the graphemic representation, though harmony is in effect in the pronunciation. For instance, one sees the form Carter'a, where the last vowel in Carter is pronounced so that it harmonizes with a in Turkish, while the e in the surface form does not harmonize with a. We are nevertheless rather satisfied with our solution, as in our experiments we have noted that well below 1% of the forms remain unknown, and these are usually item markers in formatted or itemized lists, or obscure foreign acronyms.",
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 93, |
| "text": "(Oflazer, 1993)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Preprocessor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "I.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Preprocessor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To illustrate the flavor of our rules, we can give the following examples. The first example chooses parses with case feature ablative, preceding an unambiguous postposition which subcategorizes for an ablative nominal form. A second rule selects an adjective parse following a determiner-adjective sequence, and before a noun without a possessive marker. Another sample rule chooses a nominal form with a possessive marker 2SG following a pronoun with 2SG agreement and genitive case, enforcing the simplest form of noun-noun noun phrase constraints.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint Rules", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Our system uses two hand-crafted sets of rules, in combination with the rules that are learned by unsupervised learning:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint Rules", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "1. We use an initial set of hand-crafted choose rules to speed up the learning process by creating disambiguated contexts over which statistics can be collected. These rules (examples of which are given above) are independent of the corpus that is to be tagged, and are linguistically motivated. They enforce some very common feature patterns, especially where word order is rather strict, as in NPs or PPs. 9",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint Rules", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The motivation behind these rules is that they should improve precision without sacrificing recall. These are rules which impose very tight constraints so as not to make any recall errors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint Rules", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Our experience is that after processing with these rules, the recall is above 99% while precision improves by about 20 percentage points.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint Rules", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Another important feature of these rules is that they are applied even if the contexts are also ambiguous, as the constraints are tight. That is, if each token in a sequence of, say, three ambiguous tokens have a parse matching one of the context constraints (in the proper order), then all of them are simultaneously disambiguated.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint Rules", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "In hand-crafting these rules, we have used our experience from an earlier tagger (Oflazer and Kuruöz, 1994). Currently we use 288 hand-crafted choose rules.",
| "cite_spans": [ |
| { |
| "start": 81, |
| "end": 106, |
| "text": "(Oflazer and Kuruöz, 1994)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint Rules", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "2. We also use a set of hand-crafted heuristic delete rules to get rid of any very low probability parses. For instance, in Turkish, postpositions have rather strict contextual constraints and if there are tokens remaining with multiple parses one of which is a postposition reading, we delete that reading. Our experience is that these rules improve precision by about 10 to 12 additional percentage points with negligible impact on recall. Currently we use 43 hand-crafted delete rules.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint Rules", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Given a training corpus, with tokens annotated with possible parses (projected over selected features), we first apply the hand-crafted rules. Learning then goes on as a number of iterations over the training corpus. We proceed with the following schema which is an adaptation of Brill's formulation (Brill, 1995b) :", |
| "cite_spans": [ |
| { |
| "start": 300, |
| "end": 314, |
| "text": "(Brill, 1995b)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "9Turkish is a free constituent order language whose unmarked order is SOV.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "1. We generate a table, called incontext, of all possible unambiguous contexts which contain a token with an unambiguous (projected) parse, along with a count of how many times this parse occurs unambiguously in exactly the same context in the corpus. We refer to an entry in this table with a context C and parse P as incontext(C, P). 2. We also generate a table, called count, of all unambiguous parses in the corpus along with a count of how many times each parse occurs in the corpus. We refer to an entry in this table with a given parse P as count(P).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "3. We then start going over the corpus token by token, generating contexts as we go. incontext(C, Pmax) 6. We order all candidate rules generated during one pass over the corpus, along two dimensions:",
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 104, |
| "text": "(C, Pmax)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "(a) we group candidate rules by context specificity (given by the order in Section 3.3), (b) in each group, we order rules by descending score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "We maintain score thresholds associated with each context specificity group, the threshold of a less specific group being higher than that of a more specific group. We then choose the top-scoring rule from any group whose score equals or exceeds the threshold associated with that group. The reasoning is that we prefer more specific and/or high-scoring rules: high-scoring rules are applicable, in general, in more places, while more specific rules have stricter constraints and make more accurate morphological parse selections. We have noted that choosing the highest-scoring rule at every step may sometimes make premature commitments which cannot be undone later.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "10Either of LC or RC may be empty.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "7. The selected rules are then applied in the matching contexts and ambiguity in those contexts is reduced. During this application the following are also performed:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "(a) if the application results in an unambiguous parse in the context of the applied rule, we increment the count associated with this parse in table count. We also update the incontext table for the same context, and other contexts which contains the disambiguated parse. (b) we also generate any new unambiguous contexts that this newly disambiguated token may give rise to, and add it to the incontext table along with count 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Note that for efficiency reasons, rule candidates are not generated repeatedly during each pass over the corpus, but rather once at the beginning, and then when selected rules are applied to very specific portions of the corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "8. If there are no rules in any group that exceed its threshold, group thresholds are reduced by multiplying by a damping constant d (0 < d < 1) and iterations are continued.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "9. If the threshold for the most specific context falls below a given lower limit, the learning process is terminated.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
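Steps 6-9 above amount to a thresholded greedy selection loop over specificity groups. The following sketch illustrates one way such a loop could look; the data layout, the damping constant, and the scan order are our illustrative assumptions, not the authors' implementation:

```python
# Sketch of the candidate-rule selection loop (steps 6-9).
# Candidate rules are grouped by context specificity, most specific
# first; each group keeps its own score threshold, with less specific
# groups thresholded higher.

def select_rules(groups, thresholds, damping=0.9, lower_limit=7.0):
    """groups: list of lists of (score, rule) pairs, ordered from most
    to least specific; thresholds: per-group score thresholds (same
    order). Per iteration, take the top-scoring rule of the first group
    that meets its threshold; if no group qualifies, damp all
    thresholds. Stop when the most specific threshold falls below
    lower_limit, or when no rules remain."""
    selected = []
    # Keep rules in each group sorted by descending score.
    groups = [sorted(g, reverse=True) for g in groups]
    while thresholds[0] >= lower_limit:
        for g, thr in zip(groups, thresholds):
            if g and g[0][0] >= thr:
                selected.append(g.pop(0))  # top rule of this group
                break
        else:
            if not any(groups):
                break  # nothing left to select
            # No group produced a rule: damp thresholds and retry.
            thresholds = [t * damping for t in thresholds]
    return selected
```

In a full system the inner loop would also apply each selected rule and update the incontext and count tables before the next iteration, as step 7 describes.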
| { |
| "text": "Some of the rules that have been generated by this learning process are given below:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "1. Disambiguate around a coordinating conjunction: 3.4.1 Contexts induced by morphological derivation The procedure outlined in the previous section has to be modified slightly in the case when the unambiguous token in the rc position is a morphologically derived form. For such cases one has to take into consideration additional pieces of information. where the determiner is attached to the noun and the whole phrase is then taken as a VP although the verbal marker is on the second lexical item. If, in this case, the token bit is considered to neighbor a token whose top level inflectional features indicate it is a verb, it is likely that bit will be chosen as an adverb as it precedes a verb, whereas the correct parse is the determiner reading.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "In such a case where the right context of an ambiguous token is a derived form, one has to consider as the right context, both the top level features of final form, and the stem from which it was derived. During the set-up of the incontext table, such a context is entered twice: once with the top level feature constraints of the immediate unambiguous right-context, and once with the feature constraints of the stem. The unambiguous token in the right context is also entered to the count table once with its top level feature structure and once with the feature structure of the stem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "When generating candidate choose or delete rules, for contexts where rc is a derived form and rrc is empty, we actually generate two candidates rules for each ambiguous token in that context: 1. if llc, ic and rc then choose/delete Pi.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "2. if llc, Ic and stem(re) then choose/delete P~.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "These candidate rules are then evaluated as described above. In general all derivations in a lexical form have to be considered though we have noted that considering one level gives satisfactory results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Choose Rules", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Some morphological features are only meaningful or relevant for disambiguation only when they appear to the left or to the right of the token to be disambiguated. For instance, in the case of Turkish, the CASE feature of a nominal form is only useful in the immediate left context, while the POSS (the possessive agreement marker) is useful only in the right context. If these features along with their possible values are included in context positions where they are not relevant, they \"split\" scores and hence cause the selection of some other irrelevant rule. Using the maxim that union gives strength, we create contexts so that features not relevant to a context position are not included, thereby treating context that differ in these features as same. 11", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ignoring Features", |
| "sec_num": "3.4.2" |
| }, |
| { |
| "text": "For choosing delete rules we have experimented with two approaches. One obvious approach is to use the formulation described above for learning choose rules, but instead of generating choose rules, pick the parses that score (significantly) worse than and generate delete rules for such parses. We have implemented this approach and found that it is not very desirable due to two reasons:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Delete Rules", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "1. it generates far too many delete rules, and 2. it impacts recall seriously without a corresponding increase in precision.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Delete Rules", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "The second approach that we have used is considerably simpler. We first reprocess the training corpus but this time use a second set of projection templates, and apply initial rules, learned choose rules and heuristic delete rules. Then for every unambiguous context C = (LC, RC), with either an immediate left, or an immediate right components or both (so n Obviously these features are specific to a language. the contexts used here are the last 3 in Section 3.3), a score incontext ( C, Pi ) count ( Pi ) for each parse Pi of the (still) ambiguous token, is computed. Then, delete rules of the sort if LC and RC then delete Pi are generated for all parses with a score below a certain fraction (0.2 in our experiments) of the highest scoring parse. In this process, our main goal is to remove any seriously improbable parses which may somehow survive all the previous choose and delete constraints applied so far. Using a second set of templates which are more specific than the templates used during the learning of the choose rules, we introduce features we were originally projected out.", |
| "cite_spans": [ |
| { |
| "start": 485, |
| "end": 494, |
| "text": "( C, Pi )", |
| "ref_id": null |
| }, |
| { |
| "start": 501, |
| "end": 507, |
| "text": "( Pi )", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Delete Rules", |
| "sec_num": "3.5" |
| }, |
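The scoring just described, incontext(C, Pi)/count(Pi) with a cutoff at a fraction (0.2) of the best score, can be sketched as follows; the table representations are our hypothetical assumptions, and only the formula and the cutoff come from the text:

```python
# Sketch of delete-rule generation for one unambiguous context C.
# Each parse Pi of the still-ambiguous token is scored as
#   incontext(C, Pi) / count(Pi)
# and a "delete Pi" rule is emitted for every parse scoring below
# `fraction` (0.2 in the paper's experiments) of the best score.

def delete_rules(context, parses, incontext, count, fraction=0.2):
    """incontext: dict mapping (context, parse) -> frequency of the
    parse in that unambiguous context; count: dict mapping parse ->
    frequency of the parse over all unambiguous occurrences."""
    scores = {p: incontext.get((context, p), 0) / count[p] for p in parses}
    best = max(scores.values())
    return [("delete", context, p)
            for p, s in scores.items() if s < fraction * best]
```

A parse that never occurs in the context gets score 0 and is always deleted, which matches the stated goal of weeding out parses that (almost) never happen.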
| { |
| "text": "Our experience has been that less strict contexts (e.g., just alc or rc) generate very useful delete rules, which basically weed out what can (almost) never happen as it is certainly not very feasible to formulate hand-crafted rules that specify what sequences of features are not possible. Some of the interesting delete rules learned here are:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Delete Rules", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "1. Delete the first of two consecutive verb parses: 2. Delete accusative case marked noun parse before a postposition that subcategorizes for a nominative noun: 3. Delete the accusative case marked parse without any possessive marking, if the previous form has genitive case marking (signaling a genitivepossessive NP construction): ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Delete Rules", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "After applying hand-crafted rules to a text to be disambiguated we arrive at a state where ambiguity is about 1.10 to 1.15 parses per token (down from 1.70 to 1.80 parses per token) without any serious loss on recall. This state allows statistics to be collected over unambiguous contexts. To remove additional parses which never appear in any unambiguous context we use the scoring described above for choosing delete rules, to discard parses on the current text based on context statistics} 2 We make three passes 12Please note that delete rules learned may be applied to future texts to be disambiguated, while this step is over the current text, scoring parses in unambiguous contexts of the form used in generating delete rules, and discarding parses whose score is below a certain fraction of the maximum scoring parse, on the fly. The only difference with the scoring used for delete rules, is that the score of a parse Pi here is a weighted sum of the quantity incontext (C, Pi) count ( Pi ) evaluated for three contexts in the case both the lc and rc are unambiguous/", |
| "cite_spans": [ |
| { |
| "start": 979, |
| "end": 986, |
| "text": "(C, Pi)", |
| "ref_id": null |
| }, |
| { |
| "start": 993, |
| "end": 999, |
| "text": "( Pi )", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using context statistics to delete parses", |
| "sec_num": "3.6" |
| }, |
| { |
| "text": "Given a new text annotated with all morphological parses (this time the parses are not projected), we proceed with the following steps for disambiguation:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps in Disamblguating a Text", |
| "sec_num": "3.7" |
| }, |
| { |
| "text": "1. The initial hand-crafted choose rules are applied first. These rules always constrain top level inflectional features, and hence, any stems fromn derivational processes are not considered unless explicitly indicated in the constraint itself.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps in Disamblguating a Text", |
| "sec_num": "3.7" |
| }, |
| { |
| "text": "2. The hand-crafted delete clean-up rules are applied.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps in Disamblguating a Text", |
| "sec_num": "3.7" |
| }, |
| { |
| "text": "3. Context statistics described in the preceding section are used to discard further parses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps in Disamblguating a Text", |
| "sec_num": "3.7" |
| }, |
| { |
| "text": "4. The choose rules that have been learned earlier, are then repeatedly applied to unambiguous contexts, until no more ambiguity reduction is possible. During the application of these rules, if the immediate right context of a token is a derived form, then the stem of the right context is also checked against the constraint imposed by the rule. So if the rule right context constraint subsumes the top level feature structure or the stem feature structure, then the rule succeeds and is applied if all other constraints are also satisfied.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps in Disamblguating a Text", |
| "sec_num": "3.7" |
| }, |
| { |
| "text": "5. Finally, the delete rules that have been learned are applied repeatedly to unambiguous contexts, until no more ambiguity reduction is possible.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps in Disamblguating a Text", |
| "sec_num": "3.7" |
| }, |
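Taken together, steps 1-5 form a pipeline whose last two stages iterate to a fixed point. A schematic sketch follows; the stage functions are hypothetical placeholders standing in for the rule sets described above:

```python
# Schematic pipeline for disambiguating a new text (steps 1-5).
# Each stage takes and returns a list of tokens, where a token is a
# list of its remaining candidate parses.

def apply_until_fixed(rule_pass, text):
    # Repeat a rule pass until no further ambiguity reduction occurs.
    while True:
        new = rule_pass(text)
        if new == text:
            return new
        text = new

def disambiguate(text, initial_choose, initial_delete,
                 context_stats, learned_choose, learned_delete):
    text = initial_choose(text)    # 1. hand-crafted choose rules
    text = initial_delete(text)    # 2. hand-crafted clean-up deletes
    text = context_stats(text)     # 3. discard parses via context statistics
    text = apply_until_fixed(learned_choose, text)  # 4. learned choose rules
    text = apply_until_fixed(learned_delete, text)  # 5. learned delete rules
    return text
```

The fixed-point iteration in steps 4 and 5 matters because each rule application can create new unambiguous contexts that enable further rules to fire.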
| { |
| "text": "We have applied our learning system to two Turkish texts. Some statistics on these texts are given in Table 1 . The first text labeled ARK is a short text on near eastern archaeology. The second text from which fragments whose labels start with C are derived, is a book on early 20 ~h history of Turkish Republic.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 102, |
| "end": 109, |
| "text": "Table 1", |
| "ref_id": "TABREF11" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In Table 1 , the tokens considered are that are generated after morphological analysis, unknown word processing and any lexical coalescing is done. The applied to the current text on which disambiguation is performed. words that are unknown are those that could not even be processed by the unknown noun processor. Whenever an unknown word had more than one parse it was counted under the appropriate group. We learned rules from ARK itself, and on the first 500, 1000, and 2000 sentence portions of C2400. C270 which was from the remaining 400 sentences of C2400 was set aside for testing. Gold standard disambiguated versions for ARK, C270 were prepared manually to evaluate the automatically tagged versions.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 10, |
| "text": "Table 1", |
| "ref_id": "TABREF11" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Our results are summarized in the following set of tables. Tables 2 and 3 give the ambiguity, recall and precision initially, after hand-crafted rules are applied, and after the contextual statistics are used to remove parses -all applications being cumulative. The rows labeled BASE give the initial state of the text to be tagged. The rows labeled INITIAL CHOOSE give the state after hand-crafted choose rules are applied, while the rows labeled INI-TIAL DELETE give the state after the hand-crafted choose and delete rules are applied. The rows labeled CONTEXT STATISTICS give the state after the rules are applied and context statistics are used (as described earlier) to remove additional parses.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 59, |
| "end": 73, |
| "text": "Tables 2 and 3", |
| "ref_id": "TABREF13" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Pre. Tables 5 and 6 present the results of further disambiguation of ARK, and C270 using rules learned from training texts C500, C1000, C2000 and ARK. These rules are applied after the last stage in the ta-bles above. 13 The number of rules learned are given in Table 4 Table 5 : Average parses, recall and precision for text ARK after applying learned rules. Table 7 gives some additional statistical results at the sentence level, for each of the test texts. The columns labeled UA/C and A/C give the number and percentage of the sentences that are correctly disambiguated with one parse per token, and with more than one parse for at least one token, respectively. The columns labeled 1, 2, 3, and >3 denote the number and percentage of sentences that have 1, 2, 3, and >3 tokens, with all remaining parses incorrect. It can be seen that well 60% of the sentences are correctly morphologically disambiguated with very small number of ambiguous parses remaining.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 5, |
| "end": 19, |
| "text": "Tables 5 and 6", |
| "ref_id": null |
| }, |
| { |
| "start": 262, |
| "end": 269, |
| "text": "Table 4", |
| "ref_id": "TABREF16" |
| }, |
| { |
| "start": 270, |
| "end": 277, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 360, |
| "end": 367, |
| "text": "Table 7", |
| "ref_id": "TABREF18" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Ambiguity Recall", |
| "sec_num": null |
| }, |
| { |
| "text": "13Please note for ARK, in the first two rows, the training and the test texts are the same.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stage", |
| "sec_num": null |
| }, |
| { |
| "text": "nLearning iterations have been stopped when the maximum rule score fell below 7. Table 6 : Average parses, recall and precision for text 270 after applying learned rules.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 81, |
| "end": 88, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Stage", |
| "sec_num": null |
| }, |
| { |
| "text": "We can make a number of observations from our experience: Hand-crafted rules go a long way in improving precision substantially, but in a language like Turkish, one has to code rules that allow no, or only carefully controlled derivations, otherwise lots of things go massively wrong. Thus we have used very tight and conservative rules in hand-crafting. Although the additional impact of choose and rules that are induced by the unsupervised learning is not substantiM, this is to be expected as the stage at which they are used is when all the \"easy\" work has been done and the more notorious cases remain. An important class of rules we explicitly have avoided hand crafting are rules for disambiguating around coordinating conjunctions. We have noted that while learning choose rules, the system zeroes in rather quickly on these contexts and comes up with rather successful rules for conjunctions. Similarly, the delete rules find some interesting situations which would be virtually impossible to enumerate. Although it is easy to formulate what things can go together in a context, it is rather impossible to formulate what things can not go together. We have also attempted to learn rules directly without applying any hand-crafted rules, but this has resulted in a failure with the learning process getting stuck fairly early. This is mainly due to the lack of sufficient unambiguous contexts to bootstrap the whole disambiguation process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion of Results", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "From analysis of our results we have noted that trying to choose one correct parse for every token is rather ambitious (at least for Turkish). There are a number of reasons for this:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion of Results", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "There are genuine ambiguities. The word o is either a personal or a demonstrative pronoun (in addition to being a determiner). One simply can not choose among the first two using any amount of contextual information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion of Results", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "A given word may be interpreted in more than one way but with the same inflectional features, or with features not inconsistent with the syntactic context. This usually happens when the root of one of the forms is a proper prefix of the root of the other one. One would need serious amounts of semantic, or statistical root word and word form preference information for resolving these. where again with have a similar problem. It may be possible to resolve this one using subcategorization constraints on the object of the verb kur assuming it is in the very near preceding context, but this may be very unlikely as Turkish allows arbitrary adjuncts between the object and the verb.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion of Results", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Turkish allows sentences to consist of a number of sentences separated by commas. Hence locating a verb in the middle of a sentence is rather difficult, as certain verbal forms also have an adjectival reading, and punctuation is not very helpful as commas have many other uses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion of Results", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The distance between two constituents (of, say, a noun phrase) that have to agree in various morphosyntactic features may be arbitrar-ily long and this causes occasional mislnatches, especially if the right nominal constituent has a surface plural marker which causes a 4-way ambiguity, as in masalam. Choosing among the last three is rather problematic if the corresponding genitive form to force agreement with is outside the context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion of Results", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Among these problems, the most crucial is the second one which we believe can be solved to a great extent by using root word preference statistics and word form preference statistics. We are currently working on obtaining such statistics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion of Results", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "This paper has presented a rule-based morphological disambiguation approach which combines a set of hand-crafted constraint rules and learns additional rules to choose and delete parses, from untagged text in an unsupervised manner. We have extended the rule learning and application schemes so that the impact of various morphological phenomena and features are selectively taken into account. We have applied our approach to the morphological disambiguation of Turkish, a free-constituent order language, with agglutinative morphology, exhibiting productive inflectional and derivational processes. We have also incorporated a rather sophisticated unknown form processor which extracts any relevant inflectional or derivational markers even if the root word is unknown.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Our results indicate that by combining these hand-crafted, statistical and learned information sources, we can attain a recall of 96 to 97% with a corresponding precision of 93 to 94% and ambiguity of 1.02 to 1.03 parses per token, on test texts, however the impact of the rules that are learned is not significant as hand-crafted rules do most of the easy work at the initial stages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "5" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank Xerox Advanced Document Systems, and Lauri Karttunen of Xerox Pare and of Rank Xerox Research Centre (Grenoble) for providing us with the two-level transducer development software on which the morphological and unknown word recognizer were implemented. This research has been supported in part by a NATO Science for Stability Grant TU-LANGUAGE.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": "6" |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A simple-rule based part-of-speech tagger", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Brill", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the Third Conference on Applied Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Brill. 1992. A simple-rule based part-of-speech tagger. In Proceedings of the Third Conference on Applied Natural Language Processing, Trento, Italy.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Some advances in rule-based part of speech tagging", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Brill", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the Twelfth National Conference on Articial Intelligence (AAAI-9~)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Brill. 1994. Some advances in rule-based part of speech tagging. In Proceedings of the Twelfth Na- tional Conference on Articial Intelligence (AAAI- 9~), Seattle, Washinton.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Transformation-based errordriven learning and natural language processing: A case study in part-of-speech tagging", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Brill", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Computational Linguistics", |
| "volume": "21", |
| "issue": "4", |
| "pages": "543--566", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Brill. 1995a. Transformation-based error- driven learning and natural language processing: A case study in part-of-speech tagging. Computa- tional Linguistics, 21(4):543-566, December.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Unsupervised learning of disambiguation rules for part of speech tagging", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Brill", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the Third Workshop on Very Large Corpora", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Brill. 1995b. Unsupervised learning of dis- ambiguation rules for part of speech tagging. In Proceedings of the Third Workshop on Very Large Corpora, Cambridge, MA, June.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "A stochastic parts program and a noun phrase parser for unrestricted text", |
| "authors": [ |
| { |
| "first": "Kenneth", |
| "middle": [ |
| "W" |
| ], |
| "last": "Church", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Proceedings of the Second Conference on Applied Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenneth W. Church. 1988. A stochastic parts pro- gram and a noun phrase parser for unrestricted text. In Proceedings of the Second Conference on Applied Natural Language Processing, Austin, Texas.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A practical part-of-speech tagger", |
| "authors": [ |
| { |
| "first": "Doug", |
| "middle": [], |
| "last": "Cutting", |
| "suffix": "" |
| }, |
| { |
| "first": "Julian", |
| "middle": [], |
| "last": "Kupiec", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Pedersen", |
| "suffix": "" |
| }, |
| { |
| "first": "Penelope", |
| "middle": [], |
| "last": "Sibun", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the Third Conference on Applied Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Doug Cutting, Julian Kupiec, Jan Pedersen, and Penelope Sibun. 1992. A practical part-of-speech tagger. In Proceedings of the Third Conference on Applied Natural Language Processing, Trento, Italy.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Grammatical category disambiguation by statistical optimization", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Steven", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Derose", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Computational Linguistics", |
| "volume": "14", |
| "issue": "1", |
| "pages": "31--39", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steven J. DeRose. 1988. Grammatical category dis- ambiguation by statistical optimization. Compu- tational Linguistics, 14(1):31-39.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Parsing Turkish using the Lexical-Functional Grammar formalism", |
| "authors": [ |
| { |
| "first": "Zelal", |
| "middle": [], |
| "last": "Giingsrdii", |
| "suffix": "" |
| }, |
| { |
| "first": "Kemal", |
| "middle": [], |
| "last": "Oflazer", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Machine Translation", |
| "volume": "11", |
| "issue": "4", |
| "pages": "293--319", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zelal GiingSrdii and Kemal Oflazer. 1995. Pars- ing Turkish using the Lexical-Functional Gram- mar formalism. Machine Translation, 11(4):293- 319.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Morphological parsing and the lexicon", |
| "authors": [ |
| { |
| "first": "Jorge", |
| "middle": [], |
| "last": "Hankamer", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Lexical Representation and Process", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jorge Hankamer. 1989. Morphological parsing and the lexicon. In W. Marslen-Wilson, editor, Lexical Representation and Process. MIT Press.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Constraint Grammar-A Language-Independent System for Parsing Unrestricted Text", |
| "authors": [ |
| { |
| "first": "Fred", |
| "middle": [], |
| "last": "Karlsson", |
| "suffix": "" |
| }, |
| { |
| "first": "Atro", |
| "middle": [], |
| "last": "Voutilainen", |
| "suffix": "" |
| }, |
| { |
| "first": "Juha", |
| "middle": [], |
| "last": "Heikkils", |
| "suffix": "" |
| }, |
| { |
| "first": "Arto", |
| "middle": [], |
| "last": "Anttila", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fred Karlsson, Atro Voutilainen, Juha HeikkilS, and Arto Anttila. 1995. Constraint Grammar-A Language-Independent System for Parsing Unre- stricted Text. Mouton de Gruyter.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Finite-state lexicon compiler. XEROX", |
| "authors": [ |
| { |
| "first": "Lauri", |
| "middle": [], |
| "last": "Karttunen", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lauri Karttunen. 1993. Finite-state lexicon com- piler. XEROX, Palo Alto Research Center-Tech- nical Report, April.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Tagging and morphological disambiguation of Turkish text", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ilker Kuru6z", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "ilker Kuru6z. 1994. Tagging and morphological disambiguation of Turkish text. Master's thesis, Bilkent University, Department of Computer En- gineering and Information Science, July.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Learning morpho-lexical probabilities fi'om an untagged corpus with an application to Hebrew", |
| "authors": [ |
| { |
| "first": "Moshe", |
| "middle": [], |
| "last": "Levinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Uzzi", |
| "middle": [], |
| "last": "Ornan", |
| "suffix": "" |
| }, |
| { |
| "first": "Alon", |
| "middle": [], |
| "last": "Itai", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Computational Linguistics", |
| "volume": "21", |
| "issue": "3", |
| "pages": "383--404", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Moshe Levinger, Uzzi Ornan, and Alon Itai. 1995. Learning morpho-lexical probabilities from an untagged corpus with an application to Hebrew. Computational Linguistics, 21(3):383-404, September.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Tagging and morphological disambiguation of Turkish text", |
| "authors": [ |
| { |
| "first": "Kemal", |
| "middle": [], |
| "last": "Oflazer", |
| "suffix": "" |
| }, |
| { |
| "first": "İlker", |
| "middle": [], |
| "last": "Kuruöz", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 4th Applied Natural Language Processing Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "144--149", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kemal Oflazer and İlker Kuruöz. 1994. Tagging and morphological disambiguation of Turkish text. In Proceedings of the 4th Applied Natural Language Processing Conference, pages 144-149. ACL, October.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Two-level description of Turkish morphology", |
| "authors": [ |
| { |
| "first": "Kemal", |
| "middle": [], |
| "last": "Oflazer", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the Sixth Conference of the European Chapter of the Association for Computational Linguistics, April. A full version appears in Literary and Linguistic Computing", |
| "volume": "9", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kemal Oflazer. 1993. Two-level description of Turkish morphology. In Proceedings of the Sixth Conference of the European Chapter of the Association for Computational Linguistics, April. A full version appears in Literary and Linguistic Computing, Vol. 9, No. 2, 1994.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Ambiguity resolution in a reductionistic parser", |
| "authors": [ |
| { |
| "first": "Atro", |
| "middle": [], |
| "last": "Voutilainen", |
| "suffix": "" |
| }, |
| { |
| "first": "Pasi", |
| "middle": [], |
| "last": "Tapanainen", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of EACL'93", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Atro Voutilainen and Pasi Tapanainen. 1993. Ambiguity resolution in a reductionistic parser. In Proceedings of EACL'93, Utrecht, Holland.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Morphological disambiguation", |
| "authors": [ |
| { |
| "first": "Atro", |
| "middle": [], |
| "last": "Voutilainen", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text, chapter 5", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Atro Voutilainen. 1995a. Morphological disambiguation. In Fred Karlsson, Atro Voutilainen, Juha Heikkilä, and Arto Anttila, editors, Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text, chapter 5. Mouton de Gruyter.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "A syntax-based part-of-speech analyzer", |
| "authors": [ |
| { |
| "first": "Atro", |
| "middle": [], |
| "last": "Voutilainen", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the Seventh Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Atro Voutilainen. 1995b. A syntax-based part-of-speech analyzer. In Proceedings of the Seventh Conference of the European Chapter of the Association for Computational Linguistics, Dublin, Ireland.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "num": null, |
| "text": "The structure of the constraint-based morphological disambiguation system.", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "num": null, |
| "text": "The score of the candidate rule is then computed as: Score_i = (incontext(C, P_i) - count(P_i)) / count(P_max).", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "num": null, |
| "text": "[llc:[], lc:[], delete:[cat:verb], rc:[[cat:verb]], rrc:[]]", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "num": null, |
| "text": "cat:noun, agr:3SG, poss:NONE, case:acc], rc:[[cat:postp, subcat:nom]], rrc:[]].", |
| "uris": null |
| }, |
| "FIGREF4": { |
| "type_str": "figure", |
| "num": null, |
| "text": "[llc:[], lc:[[cat:noun, agr:3SG, poss:NONE, case:gen]], delete:[cat:noun, agr:3SG, poss:NONE, case:acc], rc:[], rrc:[]].", |
| "uris": null |
| }, |
| "TABREF2": { |
| "content": "<table><tr><td>TOKENIZATION</td><td>MORPHOLOGY</td><td>NON-LEXICAL COLLOCATION</td><td>UNKNOWN WORD</td><td>FORMAT CONVERSION</td><td/><td>MORPHOLOGICAL DISAMBIGUATION</td></tr><tr><td/><td/><td>RECOGNIZER</td><td>PROCESSOR</td><td>(PROJECTION)</td><td/><td>MODULE</td></tr><tr><td/><td/><td>PREPROCESSOR</td><td/><td/><td/></tr><tr><td/><td/><td/><td colspan=\"3\">items have the parses</td></tr><tr><td/><td/><td/><td colspan=\"4\">[[CAT VERB] [ROOT yap] [SENSE POS]</td></tr><tr><td/><td/><td/><td colspan=\"4\">[TAM1 AORIST] [AGR 3SG]]</td></tr><tr><td/><td/><td/><td/><td>LEARNING</td><td colspan=\"2\">LEARNED RULES</td></tr><tr><td/><td/><td/><td colspan=\"4\">[[CAT VERB] [ROOT yap] [SENSE NEG] MODULE</td></tr><tr><td/><td/><td/><td colspan=\"4\">[TAM1 AORIST] [AGR 3SG]]</td></tr><tr><td/><td/><td/><td colspan=\"4\">respectively, the preprocessor generates the</td></tr><tr><td/><td/><td/><td colspan=\"2\">feature sequence</td><td/></tr><tr><td/><td/><td/><td colspan=\"4\">[[CAT VERB] [ROOT koş] [SENSE POS]</td></tr><tr><td/><td/><td/><td colspan=\"4\">[TAM1 AORIST] [AGR 3SG]</td></tr><tr><td/><td/><td/><td colspan=\"4\">[CONV ADVERB DUP-AOR] [TYPE TEMP]]</td></tr><tr><td/><td/><td/><td colspan=\"4\">3. duplicated verbal and derived adverbial</td></tr><tr><td/><td/><td/><td colspan=\"4\">forms with the same verbal root acting as</td></tr><tr><td/><td/><td/><td colspan=\"4\">temporal adverbs, e.g., gitti gideli,</td></tr><tr><td/><td/><td/><td colspan=\"4\">4. emphatic adjectival forms involving</td></tr><tr><td/><td/><td/><td colspan=\"4\">duplication and question clitic, e.g., güzel mi</td></tr><tr><td/><td/><td/><td colspan=\"4\">güzel (beautiful question-clitic beautiful -</td></tr><tr><td/><td/><td/><td colspan=\"2\">very beautiful)</td><td/></tr><tr><td/><td/><td/><td colspan=\"4\">5. adjective Thus in the example above for geldiğimdeki, the</td></tr><tr><td/><td/><td/><td colspan=\"4\">following feature structure is generated:</td></tr><tr><td/><td/><td/><td colspan=\"4\">[[CAT VERB] [ROOT gel] [SENSE POS]</td></tr><tr><td/><td/><td/><td colspan=\"3\">[CONV NOUN DIK] [AGR 3SG]</td></tr><tr><td/><td/><td/><td colspan=\"3\">[POSS 1SG] [CASE LOC]</td></tr><tr><td/><td/><td/><td colspan=\"2\">[CONV ADJ REL]]</td><td/></tr><tr><td/><td/><td/><td>CAT</td><td>ADJ</td><td/></tr><tr><td/><td/><td/><td/><td>CAT</td><td>NOUN</td></tr><tr><td/><td/><td/><td/><td>AGR</td><td>3SG</td></tr><tr><td/><td/><td/><td/><td>POSS</td><td>1SG</td></tr><tr><td/><td/><td/><td/><td>CASE</td><td>LOC</td></tr><tr><td/><td/><td/><td>STEM</td><td/><td>CAT</td><td>VERB</td></tr><tr><td/><td/><td/><td/><td>STEM</td><td>ROOT</td><td>gel</td></tr><tr><td/><td/><td/><td/><td/><td>SENSE</td><td>POS</td></tr><tr><td colspan=\"3\">1. duplicated optative and 3SG verbal forms</td><td/><td>SUFFIX</td><td>DIK</td></tr><tr><td colspan=\"3\">functioning as manner adverb. An example</td><td>SUFFIX</td><td>REL</td><td/></tr><tr><td/><td/><td/><td colspan=\"4\">\u2022 Finally, each such feature structure is then</td></tr><tr><td/><td/><td/><td colspan=\"4\">projected on a subset of its features. The features</td></tr><tr><td/><td/><td/><td>selected are</td><td/><td/></tr><tr><td/><td/><td/><td colspan=\"4\">-inflectional and certain derivational</td></tr><tr><td/><td/><td/><td colspan=\"4\">markers, and stems for open class of words,</td></tr></table>", |
| "text": "CONV ADVERB DUPi] [TYPE MANNER]] aorist verbal forms with root duplications and sense negation, functioning as temporal adverbs, for instance the non-lexicalized collocation yapar yapmaz; or noun duplications that act as manner adverbs, e.g., hızlı hızlı, ev ev. This module recognizes all such forms and coalesces them into new feature structures reflecting the final structure along with any inflectional information. \u2022 The preprocessor then converts each parse into a hierarchical feature structure so that the inflectional features of the form with the last category conversion (if any) are at the top level.", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF4": { |
| "content": "<table><tr><td colspan=\"2\">decreasing specificity:</td></tr><tr><td>1.</td><td>llc, lc ... rc, rrc</td></tr><tr><td>2.</td><td>llc, lc ... rc / lc ... rc, rrc</td></tr><tr><td>3.</td><td>lc ... rc</td></tr><tr><td>4.</td><td>lc / rc</td></tr></table>", |
| "text": "The system uses rules of the sort: if LC and RC then choose PARSE, or if LC and RC then delete PARSE.", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF9": { |
| "content": "<table><tr><td>...</td><td>a</td><td colspan=\"2\">table+is</td></tr><tr><td>...</td><td>is</td><td>a table</td></tr><tr><td colspan=\"4\">where the first token has the morphological parses:</td></tr><tr><td colspan=\"4\">I. [[CAT ADJ] [ROOT bir] [TYPE CARDINAL]]</td></tr><tr><td>(one)</td><td/><td/></tr><tr><td colspan=\"4\">2. [[CAT ADJ] [ROOT bir] [TYPE DETERMINER]]</td></tr><tr><td>(a)</td><td/><td/></tr><tr><td colspan=\"4\">3. [[CAT ADVERB] [ROOT bir]]</td></tr><tr><td colspan=\"3\">(only/merely)</td></tr><tr><td colspan=\"4\">and the second form has the unambiguous morpho-</td></tr><tr><td colspan=\"2\">logical parse:</td><td/></tr><tr><td colspan=\"4\">1. [[CAT NOUN] [ROOT masa] [AGR 3SG] [POSS NONE]</td></tr><tr><td colspan=\"4\">[CASE NOM] [CONV VERB NONE]</td></tr><tr><td colspan=\"4\">[TAM1PRES] [AGR 3SG]] (is table)</td></tr><tr><td colspan=\"4\">which in hierarchical formcorresponds to the ~ature</td></tr><tr><td>structure:</td><td/><td/></tr><tr><td>\"C AT</td><td/><td>VERB</td></tr><tr><td colspan=\"2\">TAM1</td><td>PRES</td></tr><tr><td>%GR</td><td/><td>3SG</td></tr><tr><td/><td/><td colspan=\"2\">ROOT masa</td></tr><tr><td colspan=\"2\">STEM</td><td/><td>3SG</td></tr><tr><td/><td/><td>I POSS</td><td>NONE</td></tr><tr><td/><td/><td colspan=\"2\">LCASE NOM</td></tr><tr><td colspan=\"3\">SUFFIX NONE</td></tr><tr><td colspan=\"4\">In the syntactic context this fragment is interpreted</td></tr><tr><td>as</td><td/><td/></tr><tr><td/><td/><td/><td>VP</td></tr><tr><td/><td/><td colspan=\"2\">NP</td><td>+dlr</td></tr><tr><td/><td/><td>DET</td><td>NOUN</td></tr><tr><td/><td/><td>I bir</td><td>I masa</td></tr></table>", |
| "text": "We will motivate this using a simple example from Turkish. Consider the example fragment:... bir masa+dlr.", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF11": { |
| "content": "<table/>", |
| "text": "Statistics on Texts", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF13": { |
| "content": "<table><tr><td>Disambiguation Stage</td><td>Ambiguity</td><td>Recall (%)</td><td>Pre. (%)</td></tr><tr><td>BASE</td><td>1.719</td><td>100.00</td><td>58.18</td></tr><tr><td>INITIAL CHOOSE</td><td>1.353</td><td>99.16</td><td>73.27</td></tr><tr><td>INITIAL DELETE</td><td>1.130</td><td>98.73</td><td>87.24</td></tr><tr><td>CONTEXT STATISTICS</td><td>1.038</td><td>96.70</td><td>93.15</td></tr></table>", |
| "text": "Average parses, recall and precision for text ARK", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF14": { |
| "content": "<table/>", |
| "text": "Average parses, recall and precision for text C270", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF16": { |
| "content": "<table><tr><td colspan=\"3\">Disambiguation</td><td>Ambiguity</td><td/><td>Recall</td><td>Pre.</td></tr><tr><td/><td>Stage</td><td/><td/><td/><td>(%)</td><td>(\u00b0Z~)</td></tr><tr><td/><td/><td colspan=\"2\">Training Set ARK</td><td/><td/></tr><tr><td>LEARNED</td><td>DELETE</td><td colspan=\"2\">I Training Set C5O0 1.027</td><td>I</td><td>97.20</td><td>94.63</td></tr><tr><td>LEARNED</td><td>DELETE</td><td/><td>1.028</td><td/><td>97.30</td><td>94.61</td></tr><tr><td/><td/><td colspan=\"2\">Training Set C1000</td><td/><td/></tr><tr><td>LEARNED</td><td>DELETE</td><td colspan=\"2\">I Training Set C2000 1.026</td><td>I</td><td>97.18</td><td>94.68</td></tr><tr><td>LEARNED</td><td>CHOOSE</td><td/><td colspan=\"4\">1.028 I 97.24 I 94.60</td></tr><tr><td>LEARNED</td><td>DELETE</td><td/><td>1.025</td><td/><td colspan=\"2\">97.1394.71</td></tr></table>", |
| "text": "Number of choose and delete rules learned from training texts.", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF18": { |
| "content": "<table><tr><td>Disambiguation Stage</td><td colspan=\"3\">Ambiguity [ Recall (%)</td><td colspan=\"2\">Pre. ] (~)</td></tr><tr><td colspan=\"3\">Training Set ARK</td><td/><td/></tr><tr><td>LEARNED CHOOSE</td><td/><td>1.035</td><td>96.64</td><td>93.36</td></tr><tr><td>LEARNED DELETE</td><td/><td>1.029</td><td>96.40</td><td>93.71</td></tr><tr><td colspan=\"2\">~ainingSet LEARNED CHOOSE</td><td>C500 1.035</td><td>96.66</td><td>93.32</td><td>I</td></tr><tr><td>LEARNED DELETE</td><td/><td>1.029</td><td>96.40</td><td>93.66</td></tr><tr><td colspan=\"3\">Training Set C1000</td><td/><td/></tr><tr><td>LEARNED CHOOSE</td><td/><td>1.035</td><td>96.66</td><td>93.34</td></tr><tr><td>LEARNED DELETE</td><td/><td>1.029</td><td>96.42</td><td>93.64</td></tr><tr><td colspan=\"3\">Training Set C20O0</td><td/><td/></tr><tr><td>LEARNED CHOOSE</td><td/><td>1.034</td><td colspan=\"2\">96.64 ] 93.41</td></tr><tr><td>LEARNED DELETE</td><td/><td>1.030</td><td>96.52</td><td>93.70</td></tr></table>", |
| "text": "Disambiguation results at the sentence level using rules learned from C2000.", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| } |
| } |
| } |
| } |