{
"paper_id": "A97-1017",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:14:03.935413Z"
},
"title": "Probabilistic and Rule-Based Tagger of an Inflective Language-a Comparison",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Haji~",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Barbora",
"middle": [],
"last": "Hladk~i",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present results of probabilistic tagging of Czech texts in order to show how these techniques work for one of the highly morphologically ambiguous inflective languages. After description of the tag system used, we show the results of four experiments using a simple probabilistic model to tag Czech texts (unigram, two bigram experiments, and a trigram one). For comparison, we have applied the same code and settings to tag an English text (another four experiments) using the same size of training and test data in the experiments in order to avoid any doubt concerning the validity of the comparison. The experiments use the source channel model and maximum likelihood training on a Czech handtagged corpus and on tagged Wall Street Journal (WSJ) from the LDC collection. The experiments show (not surprisingly) that the more training data, the better is the success rate. The results also indicate that for inflective languages with 1000+ tags we have to develop a more sophisticated approach in order to get closer to an acceptable error rate. In order to compare two different approaches to text tagging-statistical and rule-based-we modified Eric Brill's rule-based part of speech tagger and carried out two more experiments on the Czech data, obtaining similar results in terms of the error rate. We have also run three more experiments with greatly reduced tagset to get another comparison based on similar tagset size.",
"pdf_parse": {
"paper_id": "A97-1017",
"_pdf_hash": "",
"abstract": [
{
"text": "We present results of probabilistic tagging of Czech texts in order to show how these techniques work for one of the highly morphologically ambiguous inflective languages. After description of the tag system used, we show the results of four experiments using a simple probabilistic model to tag Czech texts (unigram, two bigram experiments, and a trigram one). For comparison, we have applied the same code and settings to tag an English text (another four experiments) using the same size of training and test data in the experiments in order to avoid any doubt concerning the validity of the comparison. The experiments use the source channel model and maximum likelihood training on a Czech handtagged corpus and on tagged Wall Street Journal (WSJ) from the LDC collection. The experiments show (not surprisingly) that the more training data, the better is the success rate. The results also indicate that for inflective languages with 1000+ tags we have to develop a more sophisticated approach in order to get closer to an acceptable error rate. In order to compare two different approaches to text tagging-statistical and rule-based-we modified Eric Brill's rule-based part of speech tagger and carried out two more experiments on the Czech data, obtaining similar results in terms of the error rate. We have also run three more experiments with greatly reduced tagset to get another comparison based on similar tagset size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Languages with rich inflection like Czech pose a special problem for morphological disambiguation (which is usually called tagging1). For example, the ending \"-u\" is not only highly ambiguous, but at the same time it carries complex information: it corresponds to the genitive, the dative and the locative singular for inanimate nouns, or the dative singular for animate nouns, or the accusative singular for feminine nouns, or the first person singular present tense active participle for certain verbs. There are two different techniques for text tagging: a stochastic technique and a rule-based technique. Each approach has some advantages --for stochastic techniques there exists a good theoretical framework, probabilities provide a straightforward way how to disambiguate tags for each word and probabilities can be acquired automatically from the data; for rule-based techniques the set of meaningful rules is automatically acquired and there exists an easy way how to find and implement improvements of the tagger. Small set of rules can be used, in contrast to the large statistical tables. Given the success of statistical methods in different areas, including text tagging, given the very positive results of English statistical taggers and given the fact that there existed no statistical tagger for any Slavic language we wanted to apply statistical methods even for the Czech language although it exhibits a rich inflection accompanied by a high degree of ambiguity. Originally, we expected that the result would be plain negative, getting no more than about two thirds of the tags correct. However, as we show below, we got better results than we had expected. We used the same statistical approach to tag both the English text and the Czech text. For English, we obtained results comparable with the results presented in (Merialdo, 1992) as well as in (Church, 1992) . For Czech, we obtained results which are less satisfactory than those for English. 
Given the comparability of the accuracy of the rule-based part-of-speech (POS) tagger (Brill, 1992) with the accuracy of the stochastic tagger, and given the fact that a rule-based POS tagger has never been used for a Slavic language, we have tried to apply rule-based methods to Czech as well. (The development of automatic tagging of Czech is/was supported fully or partially by the following grants/projects: Charles University GAUK 39/94, Grant Agency of the Czech Republic GACR 405/96/K214, and Ministry of Education VS96151.)",
"cite_spans": [
{
"start": 1837,
"end": 1853,
"text": "(Merialdo, 1992)",
"ref_id": "BIBREF6"
},
{
"start": 1868,
"end": 1882,
"text": "(Church, 1992)",
"ref_id": "BIBREF3"
},
{
"start": 2054,
"end": 2067,
"text": "(Brill, 1992)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "2.1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "STATISTICAL EXPERIMENTS",
"sec_num": "2"
},
{
"text": "Czech experiment is based upon ten basic POS classes and the tags describe the possible combinations of morphological categories for each POS class. In most cases, the first letter of the tag denotes the part-of-speech; the letters and numbers which follow it describe combinations of morphological categories (for a detailed description, see (Sgall, 1967) and into seven classes according to ease. Not all possible combinations of morphological categories are meaningful, however. In addition to these usual tags we have used special tags for sentence boundaries, punctuation and a so called \"unknown tag\". In the experiments, we used only those tags which occurred at least once in the training corpus. To illustrate the form of the tagged text, we present here the following examples from our training data, with comments:",
"cite_spans": [
{
"start": 343,
"end": 356,
"text": "(Sgall, 1967)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CZECH TAGSET",
"sec_num": "2.1.1"
},
{
"text": "word The corpus was originally hand-tagged, including the lemmatization and syntactic tags. We had to do some cleaning, which means that we have disregarded the lemmatization information and the syntactic tag, as we were interested in words and tags only. Tags used in this corpus were different from our suggested tags: number of morphological categories was higher in the original sample and the notation was also different. Thus we had to carry out conversions of the original data into the format presented above, which resulted in the so-called Czech \"modified\" corpus, with the following features: tokens 621 015 words 72 445 tags 1 171 average number of tags per token 3.65 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CZECH TAGSET",
"sec_num": "2.1.1"
},
{
"text": "For the tagging of English texts, we used the Penn Treebank tagset which contains 36 POS tags and 12 other tags (for punctuation and the currency symbol). A detailed description is available in (Santorini, 1990).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ENGLISH TAGSET",
"sec_num": "2.2.1"
},
{
"text": "For training in the English experiments, we used WSJ (Marcus et al., 1993) . We had to change the format of WSJ to prepare it for our tagging software. V~e used a small (100k tokens) part of WSJ in the experiment No. 6 and the complete corpus (1M tokens) in the experiments No. 5, No. 7 and No. 8. It is interesting to note the frequencies of the most ambiguous tokens encountered in the whole \"modified\" corpus and to compare them with the English data. Table 2.8 and Table 2 .9 contain the first tokens with the highest number of possible tags in the complete Czech \"modified\" corpus and in the complete WSJ.",
"cite_spans": [
{
"start": 53,
"end": 74,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 455,
"end": 476,
"text": "Table 2.8 and Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "ENGLISH TRAINING DATA",
"sec_num": null
},
{
"text": "Frequency #tags in train, data in train, data jejich 1 087 51 jeho 1 087 46 jeho~ 163 35 jejich~ 150 25 vedoucl 193 22 Table 2 .8",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 141,
"text": "1 087 51 jeho 1 087 46 jeho~ 163 35 jejich~ 150 25 vedoucl 193 22 Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Token",
"sec_num": null
},
{
"text": "In the Czech \"modified\" corpus, the token \"vedouc/\" appeared 193 times and was tagged by twenty two different tags: 13 tags for adjective and 9 tags for noun. The token \"vedoucf' means either: \"leading\" (adjective) or \"manager\" or \"boss\" (noun). The following columns represent the tags for the token \"vedouc/\" and their frequencies in the training data; for example \"vedoucf' was tagged twice as adjective, feminine, plural, nominative, first degree, affirmative. It is clear from these figures that the two languages in question have quite different properties and that nothing can be said without really going through an experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token",
"sec_num": null
},
{
"text": "We have used the basic source channel model (described e.g. in (Merialdo, 1992) ). The tagging procedure \u00a2 selects a sequence of tags T for the sentence W:",
"cite_spans": [
{
"start": 63,
"end": 79,
"text": "(Merialdo, 1992)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "THE ALGORITHM",
"sec_num": "2.4"
},
{
"text": "\u00a2 : PV --+ T . In this case the optimal tagging procedure is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE ALGORITHM",
"sec_num": "2.4"
},
{
"text": "\u00a2(W) --argmaxTPr(T[W) = : argmaxTPr(TlW) * Pr(W) = = argrnaxTPr(W,T) = --argmaxTPr(W[T) * Pr(T).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE ALGORITHM",
"sec_num": "2.4"
},
{
"text": "Our implementation is based on generating the (W,T) pairs by means of a probabilistic model using approximations of probability distributions Pr(WIT) and Pr(T). The Pr(T) is based on tag bigrams and trigrams, and Pr(WIT ) is approximated as the product of Pr(wi[tl). The parameters have been estimated by the usual maximum likelihood training method, i.e. we approximated them as the relative frequencies found in the training data with smoothing based on estimated unigram probability and uniform distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE ALGORITHM",
"sec_num": "2.4"
},
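The decoding rule above -- choose T maximizing Pr(W|T) * Pr(T), with both distributions estimated as relative frequencies -- can be sketched as a toy bigram tagger. This is an illustrative reconstruction, not the authors' code: the miniature corpus, the tag names (SB, DT, NN, VB) and the boundary markers are invented, and the paper's unigram/uniform smoothing is omitted for brevity.

```python
from collections import defaultdict
import math

# Hypothetical toy corpus of (word, tag) pairs with sentence-boundary
# markers; the real experiments used the Czech "modified" corpus / WSJ.
train = [
    [("<s>", "SB"), ("the", "DT"), ("dog", "NN"), ("barks", "VB"), ("</s>", "SB")],
    [("<s>", "SB"), ("the", "DT"), ("cat", "NN"), ("sleeps", "VB"), ("</s>", "SB")],
]

# Maximum-likelihood training: relative frequencies for the tag-bigram
# model Pr(t_i | t_{i-1}) and the lexical model Pr(w_i | t_i).
trans = defaultdict(lambda: defaultdict(int))
emit = defaultdict(lambda: defaultdict(int))
tag_count = defaultdict(int)
for sent in train:
    for (_, t_prev), (_, t) in zip(sent, sent[1:]):
        trans[t_prev][t] += 1
    for w, t in sent:
        emit[t][w] += 1
        tag_count[t] += 1

def pr_trans(t_prev, t):
    total = sum(trans[t_prev].values())
    return trans[t_prev][t] / total if total else 0.0

def pr_emit(w, t):
    return emit[t][w] / tag_count[t] if tag_count[t] else 0.0

def tag(words):
    """phi(W) = argmax_T Pr(W|T) * Pr(T), by Viterbi search over tag bigrams."""
    chart = {"SB": (0.0, ["SB"])}  # best log-prob and tag path ending in each tag
    for w in words:
        new_chart = {}
        for t in tag_count:
            e = pr_emit(w, t)
            if e == 0.0:
                continue
            best = None
            for t_prev, (lp, path) in chart.items():
                p = pr_trans(t_prev, t)
                if p > 0.0:
                    cand = (lp + math.log(p) + math.log(e), path + [t])
                    if best is None or cand[0] > best[0]:
                        best = cand
            if best is not None:
                new_chart[t] = best
        chart = new_chart
    return max(chart.values())[1]

print(tag(["the", "dog", "sleeps", "</s>"]))
# -> ['SB', 'DT', 'NN', 'VB', 'SB']
```

Note how "dog sleeps" gets the right tags even though that word pair never occurs in training: the tag-bigram model generalizes across lexical items, which is exactly what the source channel factorization buys.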
{
"text": "The results of the Czech experiments are displayed in Table 2 Table 2 .11b",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 61,
"text": "Table 2",
"ref_id": "TABREF0"
},
{
"start": 62,
"end": 69,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "THE RESULTS",
"sec_num": "2.5"
},
{
"text": "A 32 0 0 0 6 3 C 0 4 0 0 1 0 F 0 0 0 0 0 0 K 0 0 0 0 0 0 N 4 0 0 0 64 8 O 0 0 0 0 1 0 P 0 0 0 0 0 3 R 0 0 0 0 1 1 S 0 0 0 0 0 0 V 0 0 0 0 3 8 T 0 0 0 0 1 0 X 0 0 0 0 0 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "[ [[A IC [F ]K IN lO",
"sec_num": null
},
{
"text": "The letters in the first column and row denote POS classes, the interpunction (T) and the \"unknown tag\" (X). The numbers show how many times the tagger assigned an incorrect POS tag to a token in the test file. The total number of errors was 244. Altogether, fifty times the adjectives (A) were tagged incorrectly, nouns (N) 93 times, numbers (C) 5 times and etc. (see the last unmarked column in Table 2 .11b); to provide a better insight, we should add that in 32 cases, when the adjective was correctly tagged as an adjective, but the mistakes appeared in the assignment of morphological categories (see Table 2 .12), 6 times the adjective was tagged as a noun, twice as a pronoun, 3 times as an adverb and so on (see the second row in ",
"cite_spans": [],
"ref_spans": [
{
"start": 397,
"end": 404,
"text": "Table 2",
"ref_id": "TABREF0"
},
{
"start": 607,
"end": 614,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "I] P [ R I s I V I T I X I",
"sec_num": null
},
{
"text": "] 64 [[ 11 [ 5 [ 41 [ 2 [ 4 [ 1 ] (Schiller, 1996) describes the general architecture of the tool for noun phrase mark-up based on finitestate techniques and statistical part-of-speech disambiguation for seven European languages. For Czech, we created a prototype of the first step of this process --the part-of-speech (POS) tagger -using Rank Xerox tools (Tapanainen, 1995) , (Cutting et al., 1992 ).",
"cite_spans": [
{
"start": 5,
"end": 33,
"text": "[[ 11 [ 5 [ 41 [ 2 [ 4 [ 1 ]",
"ref_id": null
},
{
"start": 34,
"end": 50,
"text": "(Schiller, 1996)",
"ref_id": "BIBREF8"
},
{
"start": 356,
"end": 374,
"text": "(Tapanainen, 1995)",
"ref_id": null
},
{
"start": 377,
"end": 398,
"text": "(Cutting et al., 1992",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N l[ g In t c I g&c [ n&c I ->NZ",
"sec_num": null
},
{
"text": "The first step of POS tagging is obviously a definition of the POS tags. We performed three ex-2We used a speciM tag XX for unknown words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS TAGSET",
"sec_num": "2.6.1"
},
{
"text": "periments. These experiments differ in the POS tagset. During the first experiment we designed tagset which contains 47 tags. The POS tagset can be described as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS TAGSET",
"sec_num": "2.6.1"
},
{
"text": "Category Symbol Table 2 .20",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "POS TAGSET",
"sec_num": "2.6.1"
},
{
"text": "The results show that the more radical reduction of Czech tags (from 1171 to 34) the higher accuracy of the results and the more comparable are the Czech and English results. However, the difference in the error rate is still more than visible --here we can speculate that the reason is that Czech is \"free\" word order language, whereas English is not. Table 2 .19",
"cite_spans": [],
"ref_spans": [
{
"start": 353,
"end": 360,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "POS TAGSET",
"sec_num": "2.6.1"
},
{
"text": "The analysis of the results of the first experiment showed very high ambiguity between the nominative and accusative cases of nouns, adjectives, pronouns and numerals. That is why we replaced the tags for nominative and accusative of nouns, adjectives, pronouns and numerals by new tags NOUNANA, ADJANA, PRONANA and NUMANA (meaning nominative or accusative, undistinguished). The rest of the tags stayed unchanged. This led 43 POS tags. In the third experiment we deleted the morphological information for nouns and adjectives alltogether. This process resulted in the final 34 POS tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS TAGSET",
"sec_num": "2.6.1"
},
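The reduction just described is a pure tag-to-tag mapping, so it can be sketched in a few lines. The merged tag names (NOUNANA, ADJANA, PRONANA, NUMANA) come from the text; the input tag spellings (e.g. NOUN1 for a nominative noun, NOUN4 for an accusative one) are hypothetical placeholders for the paper's actual 47-tag inventory.

```python
# Hypothetical sketch of the second-experiment tagset reduction: nominative
# (case 1) and accusative (case 4) merge for nouns, adjectives, pronouns
# and numerals; all other tags pass through unchanged.
MERGES = {"NOUN": "NOUNANA", "ADJ": "ADJANA", "PRON": "PRONANA", "NUM": "NUMANA"}

def reduce_tag(tag: str) -> str:
    for pos, merged in MERGES.items():
        if tag in (pos + "1", pos + "4"):  # nominative or accusative
            return merged
    return tag

print([reduce_tag(t) for t in ["NOUN1", "NOUN4", "NOUN2", "VERB"]])
# -> ['NOUNANA', 'NOUNANA', 'NOUN2', 'VERB']
```

Applying such a mapping to both the training and test data before counting is all that is needed to rerun the same tagger on the reduced tagset.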
{
"text": "A simple rule-based part of speech (RBPOS) tagger is introduced in (Brill, 1992) . The accuracy of this tagger for English is comparable to a stochastic English POS tagger. From our point of view, it is very interesting to compare the results of Czech stochastic POS (SPOS) tagger and a modified RB-POS tagger for Czech.",
"cite_spans": [
{
"start": 67,
"end": 80,
"text": "(Brill, 1992)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A RULE-BASED EXPERIMENT FOR CZECH",
"sec_num": "3"
},
{
"text": "We used the same corpus used in the case of the SPOS tagger for Czech. RBPOS requires different input format; we thus converted the whole corpus into this format, preserving the original contents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING DATA",
"sec_num": "3.1"
},
{
"text": "It is an obvious fact that the Czech tagset is totally different from the English tagset. Therefore, we had to modify the method for the initial guess. For Czech the algorithm is: \"If the word is W_SB (sentence boundary) assign the tag T_SB, otherwise assign the tag NNSI.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEARNING",
"sec_num": "3.2"
},
{
"text": "The first stage of training is learning rules to predict the most likely tag for unknown words. These rules operate on word types; for example, if 3The percentage of ambiguous word forms in the test file.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEARNING RULES TO PREDICT THE MOST LIKELY TAG FOR UNKNOWN WORDS",
"sec_num": "3.2.1"
},
{
"text": "token will increase after a morphological analyzer is added. Success should be guaranteed, however, by certain tagset reductions, as the original tagset (even after the reductions mentioned above) is still too detailed. This is especially true when comparing it to English, where some tags represent, in fact, a set of tags to be discriminated later (if ever). For example, the tag VB used in the WSJ corpus actually means \"one of the (five different) tags for 1st person sg., 2nd person sg., 1st person pl., etc.\". First, we will reduce the tagset to correspond to our morphological analyzer which already uses a reduced one. Then, the tagset will be reduced even further, but nevertheless, not as much as we did for the Xeroxtools-based experiment, because that tagset is too \"rough\" for many applications, even though the results are good.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEARNING RULES TO PREDICT THE MOST LIKELY TAG FOR UNKNOWN WORDS",
"sec_num": "3.2.1"
},
{
"text": "Regarding tagset reduction, we should note that we haven't performed a \"combined\" experiment, i.e. using the full (1100+) tagset for (thus) \"intermediate\" tagging, but only the reduced tagset for the final results. However, it can be quite simply derived from the tables 2.10, 2.11a and 2.11b, that the error rate would not drop much: it will remain high at about 6.5070 (based on the results of experiment No. 4) using the very small tagset of 12 (= number or lines in table 2.11a) tags used for part of speech identification. This is even much higher than the error rate reported here for the smallest tagset used in the 'pure' experiment (sect. 2.6, table 2.20), which was at 3.8~0. This suggests that maybe the pure methods (which are obviously also simple to implement) are in general better than the \"combined\" methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEARNING RULES TO PREDICT THE MOST LIKELY TAG FOR UNKNOWN WORDS",
"sec_num": "3.2.1"
},
{
"text": "Another possibility of an improvement is to add more data to allow for more reliable trigram estimates. We will also add contemporary newspaper texts to our training data in order to account for recent language development. Hedging against failure of all these simple improvements, we are also working on a different model using independent predictions for certain grammatical categories (and the lemma itself), but the final shape of the model has not yet been determined. This would mean to introduce constraints on possible combinations of morphological categories and take them into account when \"assembling\" the final tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEARNING RULES TO PREDICT THE MOST LIKELY TAG FOR UNKNOWN WORDS",
"sec_num": "3.2.1"
}
],
"back_matter": [
{
"text": "a word ends by \"d37;, it is probably a masculine adjective. To compare the influence of the size of the training files on the accuracy of the tagger we performed two subexperiments4: Table 3 .1We present here an example of rules taken from LEXRULEOUTFILE from the exp. No. 1: u hassuf 1 NIS2 # change the tag to NIS2 if the suffix is \"u\" y hassuf 1 NFS2 # change the tag to NFS2 if the suffix is \"y\" ho hassuf 2 AIS21A # change the tag to AIS21A if the suffix is \"ho\" \u00a3ch hassuf 3 NFP6 # change the tag to NFP6 if the suffix is \"\u00a3ch\" nej addpref 3 O2A # change the tag to O2A if adding the prefix \"nej\" results in a word",
"cite_spans": [],
"ref_spans": [
{
"start": 183,
"end": 190,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
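The rule formats listed above ("X hassuf N TAG", "X addpref N TAG") lend themselves to a tiny interpreter. This is a hedged sketch of how such lexical rules could apply to an unknown word, not Brill's implementation: the rule tags are copied from the listing, while the vocabulary and example words are invented.

```python
# Lexical rules copied from the LEXRULEOUTFILE excerpt above
# (the suffix/prefix length is implicit in the affix string here).
RULES = [
    ("hassuf", "u", "NIS2"),
    ("hassuf", "y", "NFS2"),
    ("hassuf", "ho", "AIS21A"),
    ("addpref", "nej", "O2A"),
]

# Hypothetical vocabulary used by the "addpref" check ("adding the prefix
# results in a word"); Brill's tagger derives this from the training corpus.
KNOWN_WORDS = {"nejdele"}

def guess_tag(word, default="NNSI"):
    """Start from the initial guess (NNSI, as in the modified initial-guess
    rule above) and let each matching rule in order rewrite the tag."""
    tag = default
    for kind, affix, new_tag in RULES:
        if kind == "hassuf" and word.endswith(affix):
            tag = new_tag
        elif kind == "addpref" and (affix + word) in KNOWN_WORDS:
            tag = new_tag
    return tag

print(guess_tag("hradu"), guess_tag("dele"), guess_tag("pes"))
# -> NIS2 O2A NNSI
```

Because later rules overwrite earlier ones, rule order matters, which is the essence of Brill's transformation-based learning: each learned rule is appended so that it corrects the residual errors of the rules before it.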
{
"text": "The second stage of training is learning rules to improve tagging accuracy based on contextual cues. These rules operate on individual word tokens.4We use the same names of files and variables as Eric Brill in the rule-based POS tagger's documentation. TAGGED-CORPUS --manually tagged training corpus, UNTAGGED-CORPUS --collection of all untagged texts, LEXRULEOUTFILE --the list of transformations to determine the most likely tag for unknown words, TAGGED-CORPUS-2 --manually tagged training corpus, TAGGED-CORPUS-ENTIRE --Czech \"modified\" corpus (the entire manually tagged corpus), CONTEXT-RULEFILE --the list of transformations to improve accuracy based on contextual cues. Table 3 . 3",
"cite_spans": [],
"ref_spans": [
{
"start": 679,
"end": 686,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "LEARNING CONTEXTUAL CUES",
"sec_num": "3.2.2"
},
{
"text": "The results, though they might seem negative compared to English, are still better than our original expectations. Before trying some completely different approach, we would like to improve the current simple approach by some other simple measures: adding a morphological analyzer (Hajji, 1994) as a frontend to the tagger (serving as a \"supplier\" of possible tags, instead of just taking all tags occurring in the training data for a given token), simplifying the tagset, adding more data. However, the desired positive effect of some of these measures is not guaranteed: for example, the average number of tags per",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "4"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Corpus Based Approach To Language Learning",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill. 1993. A Corpus Based Approach To Lan- guage Learning. PhD Dissertation, Department of Computer and Information Science, Univer- sity of Pennsylvania.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Some Advances in Transformation--Based Part of Speech Tagging",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the Twelfth National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill. 1994. Some Advances in Transformation- -Based Part of Speech Tagging. In: Proceedings of the Twelfth National Conference on Artificial Intelligence.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unification Morphology Grammar. PhD Dissertation, Institute of Formal and Applied Linguistics",
"authors": [],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Haji~. 1994. Unification Morphology Gram- mar. PhD Dissertation, Institute of Formal and Applied Linguistics, Charles University, Prague, Czech Republic.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Current Practice In Part Of Speech Tagging And Suggestions For The Future. For Henry Ku~era",
"authors": [
{
"first": "Kenneth",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
}
],
"year": 1992,
"venue": "Studies in Slavic Philology and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth W. Church. 1992. Current Practice In Part Of Speech Tagging And Suggestions For The Future. For Henry Ku~era, Studies in Slavic Philology and Computational Linguistics, Michi- gan Slavic Publications, Ann Arbor.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Pedersen and Penelope Sibun 1992. A Practical Part-of-Speech Tagger",
"authors": [
{
"first": "Doug",
"middle": [],
"last": "Cutting",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Kupiec",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Third Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Doug Cutting, Julian Kupiec, Jan Pedersen and Penelope Sibun 1992. A Practical Part-of- Speech Tagger. In: Proceedings of the Third Conference on Applied Natural Language Pro- cessing , Trento, Italy.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Building A Large Annotated Corpus Of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary-Ann",
"middle": [],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary- Ann Marcinkiewicz 1993. Building A Large Annotated Corpus Of English: The Penn Tree- bank. Computational Linguistics, 19(2):313-- 330.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Tagging Text With A Probabilistie Model",
"authors": [
{
"first": "Bernard",
"middle": [],
"last": "Merialdo",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "2",
"pages": "155--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernard Merialdo. 1992. Tagging Text With A Probabilistie Model. Computational Linguis- tics, 20(2):155--171",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Part Of Speech Tagging Guidelines For The Penn Treebank Project",
"authors": [
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beatrice Santorini. 1990. Part Of Speech Tag- ging Guidelines For The Penn Treebank Project. Technical report MS-CIS-90-47, Department of Computer and Information Science, University of Pennsylvania.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multilingual Finite-State Noun Phrase Extraction. ECAI'96",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Schiller",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Schiller. 1996. Multilingual Finite-State Noun Phrase Extraction. ECAI'96, Budapest, Hun- gary.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The Generative Description of a Language and the Czech Declension",
"authors": [
{
"first": "Petr",
"middle": [],
"last": "Sgall",
"suffix": ""
}
],
"year": 1967,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petr Sgall. 1967. The Generative Description of a Language and the Czech Declension (In Czech). Studie a prdce lingvistickd, 6. Prague.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "RXRC Finite-State Compiler",
"authors": [
{
"first": "Pasi",
"middle": [],
"last": "Tapanalnen",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pasi Tapanalnen. 1995. RXRC Finite-State Com- piler. Technical Report MLTT-20, Rank Xerox Research Center, Meylen, France.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A Simple Rule-Based Part of Speech Tagger",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Third Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill. 1992. A Simple Rule-Based Part of Speech Tagger. In: Proceedings of the Third Conference on Applied Natural Language Pro- cessing, Trento, Italy.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"text": "",
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td>.1 and Table</td></tr><tr><td>2.2).</td><td/><td/><td/></tr><tr><td>Morph.</td><td>Cat.</td><td colspan=\"2\">Poss. Description</td></tr><tr><td>Categ.</td><td>Var.</td><td>Val.</td><td/></tr><tr><td/><td>see</td><td/><td/></tr><tr><td/><td>Tab.</td><td/><td/></tr><tr><td/><td>2.2)</td><td/><td/></tr><tr><td>gender</td><td>g</td><td>M</td><td>masc. anim.</td></tr><tr><td/><td/><td>I</td><td>masc. inanim.</td></tr><tr><td/><td/><td>N</td><td>neuter</td></tr><tr><td/><td/><td>F</td><td>feminine</td></tr><tr><td>number</td><td>n</td><td>S</td><td>singular</td></tr><tr><td/><td/><td>P</td><td>plural</td></tr><tr><td>tense</td><td>t</td><td>M</td><td>past</td></tr><tr><td/><td/><td>P</td><td>present</td></tr><tr><td/><td/><td>F</td><td>future</td></tr><tr><td>mood</td><td>m</td><td>O</td><td>indicative</td></tr><tr><td/><td/><td>R</td><td>imperative</td></tr><tr><td>case</td><td>c</td><td>1</td><td>nominative</td></tr><tr><td/><td/><td>2</td><td>genitive</td></tr><tr><td/><td/><td>3</td><td>dative</td></tr><tr><td/><td/><td>4</td><td>accusative</td></tr><tr><td/><td/><td>5</td><td>vocative</td></tr><tr><td/><td/><td>6</td><td>locative</td></tr><tr><td/><td/><td>7</td><td>instrumental</td></tr><tr><td>voice</td><td>s</td><td>A</td><td>active voice</td></tr><tr><td/><td/><td>P</td><td>passive voice</td></tr><tr><td>polarity</td><td>a</td><td>N</td><td>negative</td></tr><tr><td/><td/><td>A</td><td>affirmative</td></tr><tr><td>deg. of comp.</td><td>d</td><td>1</td><td>base form</td></tr><tr><td/><td/><td>2</td><td>comparative</td></tr><tr><td/><td/><td>3</td><td>superlative</td></tr><tr><td>person</td><td>p</td><td>1</td><td>1st</td></tr><tr><td/><td/><td>2</td><td>2nd</td></tr><tr><td/><td/><td>3</td><td>3rd</td></tr><tr><td/><td colspan=\"2\">Table 2.1</td><td/></tr><tr><td colspan=\"4\">Note especially, that Czech nouns are divided</td></tr><tr><td colspan=\"4\">into four classes according to gender</td></tr></table>",
"html": null,
"num": null
},
"TABREF2": {
"text": "we used the corpus collected during the 1960's and 1970's in the Institute for Czech Language at the Czechoslovak Academy of Sciences.",
"type_str": "table",
"content": "<table><tr><td>2.1.2</td><td>CZECH TRAINING</td><td>DATA</td></tr><tr><td colspan=\"2\">For training,</td><td/></tr><tr><td/><td/><td>|tag</td><td>#comments</td></tr><tr><td/><td/><td>do|Rdo</td><td>#\"to\"</td></tr><tr><td/><td/><td/><td>(prepositions have their</td></tr><tr><td/><td/><td/><td>own individual tags)</td></tr><tr><td/><td/><td>odd\u00edlu|NIS2</td><td>#\"unit\"</td></tr><tr><td/><td/><td/><td>(noun, masculine inani-</td></tr><tr><td/><td/><td/><td>mate, singular, genitive)</td></tr><tr><td/><td/><td>k|Rk</td><td>#\"for\"</td></tr><tr><td/><td/><td/><td>(preposition)</td></tr><tr><td/><td/><td>sn\u00eddani|NFS3</td><td>#\"breakfast\"</td></tr><tr><td/><td/><td/><td>(noun, feminine, singular,</td></tr><tr><td/><td/><td/><td>dative)</td></tr><tr><td/><td/><td>pou\u017eije|V3SAPOMA</td><td>#\"uses\"</td></tr><tr><td/><td/><td/><td>(verb, 3rd person, singular,</td></tr><tr><td/><td/><td/><td>active,</td></tr><tr><td/><td/><td/><td>present, indicative, masc.</td></tr><tr><td/><td/><td/><td>animate, affirmative)</td></tr><tr><td/><td/><td>pro|Rpro</td><td>#\"for\"</td></tr><tr><td/><td/><td/><td>(preposition)</td></tr><tr><td/><td/><td>n\u00e1s|PP1P4</td><td>#\"us\"</td></tr><tr><td/><td/><td/><td>(pronoun, personal, 1st</td></tr><tr><td/><td/><td/><td>person, plural, accusative)</td></tr></table>",
"html": null,
"num": null
},
"TABREF3": {
"text": "We used the complete \"modified\" corpus (621015 tokens) in experiments No. 1, No. 3, and No. 4, and a small part of this corpus in experiment No. 2, as indicated in Table 2.4.",
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">tokens</td><td>110 874</td></tr><tr><td colspan=\"2\">words</td><td>22 530</td></tr><tr><td>tags</td><td/><td>882</td></tr><tr><td colspan=\"2\">average number of tags per token</td><td>2.36</td></tr><tr><td/><td>Table 2.4</td></tr><tr><td>2.2</td><td>ENGLISH EXPERIMENTS</td></tr></table>",
"html": null,
"num": null
},
"TABREF4": {
"text": ".5 contains the basic characteristics of the training data.",
"type_str": "table",
"content": "<table><tr><td/><td/><td colspan=\"3\">Experiment Experiments</td></tr><tr><td/><td/><td colspan=\"2\">No. 6</td><td>No. 5, No. 7,</td></tr><tr><td/><td/><td/><td/><td>No. 8</td></tr><tr><td colspan=\"2\">tokens</td><td/><td>110 530</td><td>1 287 749</td></tr><tr><td colspan=\"2\">words</td><td/><td>13 582</td><td>51 433</td></tr><tr><td>tags</td><td/><td/><td>45</td><td>45</td></tr><tr><td colspan=\"3\">average number</td><td>1.72</td><td>2.34</td></tr><tr><td colspan=\"3\">of tags per token</td><td/></tr><tr><td/><td/><td colspan=\"2\">Table 2.5</td></tr><tr><td>2.3</td><td colspan=\"3\">CZECH VS ENGLISH</td></tr><tr><td colspan=\"5\">Differences between Czech as a morphologically am-</td></tr><tr><td colspan=\"5\">biguous inflective language and English as a language</td></tr><tr><td colspan=\"5\">with poor inflection are also reflected in the number</td></tr><tr><td colspan=\"5\">of tag bigrams and tag trigrams. The figures given</td></tr><tr><td colspan=\"5\">in Tables 2.6 and 2.7 were obtained from the training</td></tr><tr><td>files.</td><td/><td/><td/></tr><tr><td/><td/><td>Czech</td><td/><td>WSJ</td></tr><tr><td/><td/><td>corpus</td><td/></tr><tr><td colspan=\"2\">x&lt;=4</td><td colspan=\"2\">24 064 x&lt;=10</td><td>459</td></tr><tr><td colspan=\"2\">4&lt;x&lt;=16</td><td colspan=\"3\">5 577 10&lt;x&lt;=100</td><td>411</td></tr><tr><td colspan=\"2\">16&lt;x&lt;=64</td><td colspan=\"3\">2 706 100&lt;x&lt;=1000</td><td>358</td></tr><tr><td colspan=\"2\">x&gt;64</td><td colspan=\"2\">1 581 x&gt;1000</td><td>225</td></tr><tr><td colspan=\"2\">bigrams</td><td colspan=\"2\">33 928 bigrams</td><td>1 453</td></tr><tr><td colspan=\"5\">Table 2.6 Number of bigrams with frequency x</td></tr><tr><td/><td/><td>Czech</td><td/><td>WSJ</td></tr><tr><td/><td/><td>corpus</td><td/></tr><tr><td colspan=\"2\">x&lt;=4</td><td>155 399</td><td>x&lt;=10</td><td>11 810</td></tr><tr><td colspan=\"2\">4&lt;x&lt;=16</td><td>16 371</td><td colspan=\"2\">10&lt;x&lt;=100</td><td>4 571</td></tr><tr><td colspan=\"2\">16&lt;x&lt;=64</td><td colspan=\"3\">4 380 100&lt;x&lt;=1000</td><td>1 645</td></tr><tr><td colspan=\"2\">x&gt;64</td><td colspan=\"2\">933 x&gt;1000</td><td>231</td></tr><tr><td colspan=\"2\">trigrams</td><td colspan=\"2\">177 083 trigrams</td><td>18 257</td></tr><tr><td colspan=\"5\">Table 2.7 Number of trigrams with frequency x</td></tr></table>",
"html": null,
"num": null
},
"TABREF7": {
"text": "",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF8": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>.11a). A</td></tr></table>",
"html": null,
"num": null
},
"TABREF9": {
"text": "",
"type_str": "table",
"content": "<table><tr><td/><td/><td>.13</td><td/><td/></tr><tr><td colspan=\"2\">Cllg c</td><td/><td/><td/></tr><tr><td colspan=\"2\">4 [[1 3</td><td/><td/><td/></tr><tr><td/><td/><td>Table 2.14</td><td/><td/></tr><tr><td colspan=\"3\">P Ilg c g&amp;clVD-&gt;PV</td><td/><td/></tr><tr><td colspan=\"2\">19ll8 7 3</td><td>I 1</td><td/><td/></tr><tr><td/><td/><td>Table 2.15</td><td/><td/></tr><tr><td colspan=\"4\">V I P t n Is I n&amp;t I p&amp;t t&amp;a I</td><td/></tr><tr><td>22]3</td><td colspan=\"2\">6 5151 1 I 1</td><td>1 I</td><td/></tr><tr><td/><td/><td>Table 2.16a</td><td/><td/></tr><tr><td colspan=\"3\">v II gt~a I pan~t I v-&gt;VT</td><td/><td/></tr><tr><td>6 II 1</td><td>I1</td><td>]4</td><td/><td/></tr><tr><td/><td/><td>Table 2.16b</td><td/><td/></tr><tr><td colspan=\"5\">The results of our experiments with English are</td></tr><tr><td colspan=\"3\">displayed in Table 2.17.</td><td/><td/></tr><tr><td/><td>No. 5</td><td>No. 6</td><td>No. 7</td><td>No. 8</td></tr><tr><td>test data</td><td>1 294</td><td>1 294</td><td>1 294</td><td>1 294</td></tr><tr><td>(tokens)</td><td/><td/><td/><td/></tr><tr><td>prob.</td><td colspan=\"4\">unigram bigram bigram trigram</td></tr><tr><td>model</td><td/><td/><td/><td/></tr><tr><td>incorrect</td><td>136</td><td>81</td><td>41</td><td>37</td></tr><tr><td>tags</td><td/><td/><td/><td/></tr><tr><td>tagging</td><td>89.5%</td><td>93.74%</td><td>96.83%</td><td>97.14%</td></tr><tr><td>accuracy</td><td/><td/><td/><td/></tr><tr><td/><td/><td>Table 2.17</td><td/><td/></tr><tr><td colspan=\"5\">To illustrate the results of our tagging experi-</td></tr><tr><td colspan=\"5\">ments, we present here short examples taken from</td></tr></table>",
"html": null,
"num": null
},
"TABREF10": {
"text": "Figures representing the results of all experiments are presented in the following table. We have also included the results of English tagging using the same Xerox tools.",
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td/><td>2.6.2</td><td colspan=\"2\">RESULTS</td><td/></tr><tr><td/><td/><td>Pos.</td><td>Description</td><td/><td/><td/><td/></tr><tr><td>case</td><td>c</td><td>Value NOM</td><td>nominative</td><td colspan=\"2\">language</td><td>tags</td><td>ambiguity</td><td>tagging accuracy</td></tr><tr><td/><td/><td>GEN DAT ACC VOC LOC</td><td>genitive dative accusative vocative locative</td><td colspan=\"3\">Czech Czech Czech English 76 47 43 34</td><td>39% 36% 14% 36%</td><td>91.7% 93.0% 96.2% 97.8%</td></tr><tr><td/><td/><td>INS</td><td>instrumental</td><td/><td/><td/><td/></tr><tr><td/><td/><td>INV</td><td>invariant</td><td/><td/><td/><td/></tr><tr><td>kind</td><td>Nm</td><td>PAP</td><td>past</td><td/><td/><td/><td/></tr><tr><td>verb</td><td/><td/><td>participle</td><td/><td/><td/><td/></tr><tr><td/><td/><td>PRI</td><td>present</td><td/><td/><td/><td/></tr><tr><td/><td/><td/><td>participle</td><td/><td/><td/><td/></tr><tr><td/><td/><td>INF</td><td>infinitive</td><td/><td/><td/><td/></tr><tr><td/><td/><td>IMP</td><td>imperative</td><td/><td/><td/><td/></tr><tr><td/><td/><td>TRA</td><td>transgressive</td><td/><td/><td/><td/></tr></table>",
"html": null,
"num": null
},
"TABREF11": {
"text": "",
"type_str": "table",
"content": "<table><tr><td/><td>.18</td></tr><tr><td>POS tag</td><td>Description</td></tr><tr><td>NOUN_c</td><td>nouns + case</td></tr><tr><td>ADJ_c</td><td>adjectives + case</td></tr><tr><td>PRON_c</td><td>pronouns + case</td></tr><tr><td>NUM_c</td><td>numerals + case</td></tr><tr><td>VERB_k</td><td>verbs + kind of verb</td></tr><tr><td>ADV</td><td>adverbs</td></tr><tr><td>PROP</td><td>proper names</td></tr><tr><td>PREP</td><td>prepositions</td></tr><tr><td>PSE</td><td>reflexive particle \"se\"</td></tr><tr><td>CLIT</td><td>clitics</td></tr><tr><td>CONJ</td><td>conjunctions</td></tr><tr><td>INTJ</td><td>interjections</td></tr><tr><td>PTCL</td><td>particles</td></tr><tr><td>DATE</td><td>dates</td></tr><tr><td>CM</td><td>comma</td></tr><tr><td>PUNCT</td><td>punctuation</td></tr><tr><td>SENT</td><td>sentence boundaries</td></tr></table>",
"html": null,
"num": null
}
}
}
}