| { |
| "paper_id": "U03-1015", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T03:11:33.476229Z" |
| }, |
| "title": "The Importance of High Quality Input for WSD: An Application-Oriented Comparison of Part-of-Speech Taggers", |
| "authors": [ |
| { |
| "first": "Tanja", |
| "middle": [], |
| "last": "Gaustad", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Humanities Computing University of Groningen", |
| "location": { |
| "postBox": "P.O. Box 716", |
| "postCode": "9700 AS", |
| "settlement": "Groningen", |
| "country": "The Netherlands" |
| } |
| }, |
| "email": "t.gaustad@let.rug.nl" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper, we present an applicationoriented evaluation of three Part-of-Speech (PoS) taggers in a word sense disambiguation (WSD) system. Following the intuition that high quality input is likely to influence the final results of a complex system, we test whether the more accurate taggers also produce better results when integrated into the WSD system. For this purpose, a stand-alone evaluation of the PoS taggers is used to assess which tagger is the most accurate. The results of the WSD task, computed on the training section of the Dutch Senseval-2 data, including the PoS information from all three taggers show that the most accurate PoS tags do indeed lead to the best results, thereby verifying our hypothesis. A surprising result, however, is the fact that the performance of the complex WSD system with the different PoS tags included does not necessarily reflect the stand-alone accuracy of the PoS taggers.", |
| "pdf_parse": { |
| "paper_id": "U03-1015", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper, we present an applicationoriented evaluation of three Part-of-Speech (PoS) taggers in a word sense disambiguation (WSD) system. Following the intuition that high quality input is likely to influence the final results of a complex system, we test whether the more accurate taggers also produce better results when integrated into the WSD system. For this purpose, a stand-alone evaluation of the PoS taggers is used to assess which tagger is the most accurate. The results of the WSD task, computed on the training section of the Dutch Senseval-2 data, including the PoS information from all three taggers show that the most accurate PoS tags do indeed lead to the best results, thereby verifying our hypothesis. A surprising result, however, is the fact that the performance of the complex WSD system with the different PoS tags included does not necessarily reflect the stand-alone accuracy of the PoS taggers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Certain NLP tools are typically used as a subcomponent or a pre-processor in a more complex system, rather than as a complete application in their own right. A typical example of such tools are Partof-Speech (PoS) taggers. What is usually not taken into account is the fact that the quality (in terms of accuracy) of each subpart of a complex system is likely to influence the final results considerably. Lately, standardized evaluation of NLP resources has gained more importance in the field of Computational Linguistics (e.g. CLEF workshops in information retrieval, Parseval, Senseval), but a tendency towards more application-oriented evaluation is only beginning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we will proceed to an applicationoriented comparison of three PoS taggers in a word sense disambiguation (WSD) system. We will evaluate to what extent differences in stand-alone PoS accuracy influence the results obtained in the complex WSD system using the acquired PoS information. Since the Dutch data we use is not only ambiguous with regard to meaning but also with regard to PoS, accurate PoS information is very important to achieve high disambiguation accuracy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The paper is structured as follows: We will start with a detailed description and comparison of the three PoS taggers including a stand-alone evaluation in order to compare their performance independently of the application to the WSD task. Then follows a description of the WSD system in which (the output of) the different PoS taggers will be incorporated and tested. This includes a presentation of the machine learning algorithm employed for classification (maximum entropy) and its application to WSD, as well as a note on the data and the settings used for the reported experiments. Next, the applicationdependent results of the three PoS taggers will be presented and discussed. We end the paper with conclusions and some ideas for future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The PoS taggers we compare in this article are: a Hidden Markov Model tagger (section 2.1), a Memory-Based tagger (section 2.2), a transformation-based tagger (section 2.3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison of Part-of-Speech Taggers", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We chose these three taggers because they were readily available, could easily be trained for Dutch without major changes in the architecture, and represent distinct, widely used types of existing PoS taggers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison of Part-of-Speech Taggers", |
| "sec_num": "2" |
| }, |
| { |
| "text": "All three taggers were trained on the Dutch Eindhoven corpus (uit den Boogaart, 1975 ) using the WOTAN tag set (Berghmans, 1994) . The original WOTAN tag set, consisting of 233 tags, was too detailed for our purpose. Instead, we used the limited WOTAN tag set of 48 PoS tags developed by (Drenth, 1997) for training and testing in the standalone comparison.", |
| "cite_spans": [ |
| { |
| "start": 61, |
| "end": 84, |
| "text": "(uit den Boogaart, 1975", |
| "ref_id": null |
| }, |
| { |
| "start": 111, |
| "end": 128, |
| "text": "(Berghmans, 1994)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 288, |
| "end": 302, |
| "text": "(Drenth, 1997)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison of Part-of-Speech Taggers", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In the context of our WSD application, however, we are only interested in the main PoS categories. Therefore, we discarded all additional information from the assigned PoS tags in the WSD corpus. This resulted in 12 different tags being kept: Adj (adjective), Adv (adverb), Art (article), Conj (conjunction), Int (interjection), Misc (miscellaneous), N (noun), Num (numeral), Prep (preposition), Pron (pronoun), Punc (punctuation), and V (verb). 1 For the stand-alone results, 80% of the training data was actually used for training, 10% for tuning (setting of parameters, etc.) and the accuracy was computed on the remaining 10%. Note that the results of the stand-alone comparison solely serve to illustrate the difference in performance observed independently of an application in order to be able to assess the added value of a more accurate PoS tagger in the WSD application.", |
| "cite_spans": [ |
| { |
| "start": 446, |
| "end": 447, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison of Part-of-Speech Taggers", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The first PoS tagger we used is the trigram Hidden Markov Model (HMM) tagger (Prins and van Noord, 2003) developed in the context of 'Alpino', a natural language understanding system for Dutch van der Beek et al., 2002). 2 In this standard trigram HMM, each state corresponds to the previous two PoS tags and the probabilities are directly estimated from the labeled training corpus (Manning and Sch\u00fctze, 1999) . There are two types of probabilities relevant in this model, the probability of a tag given the preceding two tags", |
| "cite_spans": [ |
| { |
| "start": 77, |
| "end": 104, |
| "text": "(Prins and van Noord, 2003)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 193, |
| "end": 222, |
| "text": "van der Beek et al., 2002). 2", |
| "ref_id": null |
| }, |
| { |
| "start": 383, |
| "end": 410, |
| "text": "(Manning and Sch\u00fctze, 1999)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hidden Markov Model PoS Tagger", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "\u00a1 \u00a3 \u00a2 \u00a5 \u00a4 \u00a7 \u00a6 \u00a9\u00a4 \u00a7 \u00a6 \u00a5 \u00a4 \u00a6 \u00a5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hidden Markov Model PoS Tagger", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "as well as the probability of a word given its tag", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hidden Markov Model PoS Tagger", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "\u00a1 \u00a3 \u00a2 \u00a5 \u00a6 !\u00a4 \u00a6 \"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hidden Markov Model PoS Tagger", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": ". These probabilities are computed for each tag individually. Training the HMM with the forwardbackward algorithm, we can calculate", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hidden Markov Model PoS Tagger", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "\u00a1 \u00a3 \u00a2 \u00a5 \u00a4 ! \u00a6 $ # % \u00a4 ! for all potential tags: \u00a1 \u00a3 \u00a2 \u00a5 \u00a4\u00a6 # % \u00a4 \u00a9 & # % ' \u00a6\u00a2 \u00a5 \u00a4 \u00a9 ( \u00a6\u00a2 \u00a5 \u00a4 ! where ' ) \u00a6 \u00a9 \u00a2 \u00a5 \u00a4 \u00a9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hidden Markov Model PoS Tagger", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "is the total (summed) probability of all paths through the model that end at tag \u00a4 at position 0 , and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hidden Markov Model PoS Tagger", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "( \u00a6 \u00a9 \u00a2 \u00a5 \u00a4 \u00a9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hidden Markov Model PoS Tagger", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "is the total probability of all paths starting at tag ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hidden Markov Model PoS Tagger", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "\u00a1 \u00a3 \u00a2 \u00a5 \u00a4 \u00a9 \u00a6 1 # 2 \u00a4 \u00a9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hidden Markov Model PoS Tagger", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": ", unlikely tags are removed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hidden Markov Model PoS Tagger", |
| "sec_num": "2.1" |
| }, |
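The alpha/beta computation described above can be sketched as follows. This is a minimal forward-backward illustration for a bigram tag HMM (a simplification of the paper's trigram model); all tags, words, and probabilities below are hypothetical toy values, not taken from the Eindhoven corpus.

```python
def forward_backward(words, tags, trans, emit, init):
    """Posterior tag probabilities P(t_i = t | w_1..w_n) for a bigram HMM.
    alpha[i][t] sums the probability of all paths ending at tag t at
    position i; beta[i][t] sums all paths continuing from tag t at
    position i to the end of the sentence."""
    n = len(words)
    alpha = [{t: init[t] * emit[t].get(words[0], 0.0) for t in tags}]
    for i in range(1, n):
        alpha.append({t: emit[t].get(words[i], 0.0)
                      * sum(alpha[i - 1][s] * trans[s][t] for s in tags)
                      for t in tags})
    beta = [{} for _ in range(n)]
    beta[n - 1] = {t: 1.0 for t in tags}
    for i in range(n - 2, -1, -1):
        beta[i] = {t: sum(trans[t][s] * emit[s].get(words[i + 1], 0.0)
                          * beta[i + 1][s] for s in tags)
                   for t in tags}
    z = sum(alpha[n - 1][t] for t in tags)  # total probability P(w_1..w_n)
    return [{t: alpha[i][t] * beta[i][t] / z for t in tags} for i in range(n)]

# Hypothetical toy model with two tags.
tags = ["N", "V"]
init = {"N": 0.7, "V": 0.3}
trans = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.8, "V": 0.2}}
emit = {"N": {"mensen": 0.5, "werken": 0.2}, "V": {"mensen": 0.1, "werken": 0.9}}
words = ["mensen", "werken"]
post = forward_backward(words, tags, trans, emit, init)
```

Pruning then amounts to dropping, at each position, every tag whose posterior falls below a threshold.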
| { |
| "text": "Smoothing of the trigram probabilities is achieved through a variant of linear interpolation (Collins, 1999) where lower order models are also taken into account and weights are assigned to each of the models to capture their relative importance.", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 108, |
| "text": "(Collins, 1999)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hidden Markov Model PoS Tagger", |
| "sec_num": "2.1" |
| }, |
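The interpolation scheme can be illustrated with a small sketch: the trigram estimate is backed off to bigram and unigram maximum likelihood estimates, combined with weights. The fixed weights and the tiny tag sequence below are invented for illustration; in the actual tagger the weights are tuned to capture the relative importance of each model.

```python
from collections import Counter

def interpolated_trigram_prob(t1, t2, t3, tag_seq, lambdas=(0.6, 0.3, 0.1)):
    """Linearly interpolated P(t3 | t1, t2): a weighted mix of trigram,
    bigram, and unigram maximum likelihood estimates."""
    uni = Counter(tag_seq)
    bi = Counter(zip(tag_seq, tag_seq[1:]))
    tri = Counter(zip(tag_seq, tag_seq[1:], tag_seq[2:]))
    p_tri = tri[(t1, t2, t3)] / bi[(t1, t2)] if bi[(t1, t2)] else 0.0
    p_bi = bi[(t2, t3)] / uni[t2] if uni[t2] else 0.0
    p_uni = uni[t3] / len(tag_seq)
    l3, l2, l1 = lambdas  # hypothetical weights; tuned in the real tagger
    return l3 * p_tri + l2 * p_bi + l1 * p_uni

# Toy tag sequence standing in for the training corpus.
tag_seq = ["Art", "N", "V", "Art", "Adj", "N", "V", "Art", "N"]
p = interpolated_trigram_prob("Art", "N", "V", tag_seq)
```

The lower-order estimates keep the probability non-zero even when a particular trigram was never observed in training.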
| { |
| "text": "Since the tagger's lexicon has been created from the training data, the test data very likely contains unknown words which means that no initial set of possible tags can be assigned to these words. Two different strategies have been incorporated in the HMM tagger used here. First, a heuristic rule for recognizing names has been added which assigns an N tag to all capitalized words. 3 Second, a set of automata (also created on the basis of the training data) is used to find possible tags based on the suffixes of unknown words (Daciuk, 2000).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hidden Markov Model PoS Tagger", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The second tagger we have used in the experiments reported here is the Memory-Based Tagger (MBT) (Daelemans et al., 2002a) . 4 It is a PoS tagger based on Memory-Based Learning, an extension of the 3 -Nearest-Neighbour approach, which has proved to be successful for a number of languages and NLP applications (Zavrel and Daelemans, 1999; Veenstra et al., 2000; Hoste et al., 2002) .", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 122, |
| "text": "(Daelemans et al., 2002a)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 125, |
| "end": 126, |
| "text": "4", |
| "ref_id": null |
| }, |
| { |
| "start": 310, |
| "end": 338, |
| "text": "(Zavrel and Daelemans, 1999;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 339, |
| "end": 361, |
| "text": "Veenstra et al., 2000;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 362, |
| "end": 381, |
| "text": "Hoste et al., 2002)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Memory-Based PoS Tagger", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "MBT consists of two components: a memorybased learning component and a performance component for similarity-based classification. During classification, the similarity between a previously unseen test example and the examples in memory is computed using a similarity metric. The category of the test example is then extrapolated based on the most similar example(s).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Memory-Based PoS Tagger", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Given an annotated corpus, three data structures are automatically extracted: a lexicon, a case base for known words, and a case base for unknown words. During tagging, each word is looked up in the lexicon and, if it is found, its lexical representation is retrieved and its context determined. The resulting pattern is disambiguated using extrapolation from the nearest neighbours in the known words case base. If a word is not present in the lexicon, its lexical representation is computed on the basis of its form, its context is determined, and the resulting pattern is disambiguated using extrapolation from nearest neighbours in the unknown words case base. In both cases, the output is a best guess of the category for the word in its current context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Memory-Based PoS Tagger", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "For the known words, the preceding two tags and words as well as the ambiguous tag and word to the right of the current position have been used to construct the known words case base. Classification was achieved using the IGTREE algorithm with one nearest neighbour. For unknown words, the preceding tag, the ambiguous tag to the right, as well as the first and the last three letters of the ambiguous word itself were taken into account to construct the unknown words case base. For classification, the IB1 algorithm with 9 nearest neighbours was used. In both cases GainRatio feature weighting was applied. For details on the different possible algorithms see (Daelemans et al., 2002b) .", |
| "cite_spans": [ |
| { |
| "start": 662, |
| "end": 687, |
| "text": "(Daelemans et al., 2002b)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Memory-Based PoS Tagger", |
| "sec_num": "2.2" |
| }, |
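The memory-based classification step can be sketched as follows: stored cases are compared to the query pattern with a weighted overlap metric, and the tag is extrapolated from the k nearest cases. The case base, feature layout, and weights below are invented for illustration; the real tagger uses the IGTREE and IB1 algorithms with GainRatio-derived weights.

```python
def weighted_overlap(a, b, weights):
    """Similarity of two feature vectors: the summed weight of matching features."""
    return sum(w for x, y, w in zip(a, b, weights) if x == y)

def mb_classify(case_base, weights, query, k=1):
    """Memory-based classification: majority vote over the k stored cases
    most similar to the query pattern."""
    ranked = sorted(case_base,
                    key=lambda case: weighted_overlap(case[0], query, weights),
                    reverse=True)
    votes = {}
    for _features, tag in ranked[:k]:
        votes[tag] = votes.get(tag, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical cases: (prev2_tag, prev_tag, ambiguous_next_tag, word) -> tag
case_base = [
    (("Art", "Adj", "N/V", "werken"), "N"),
    (("Pron", "Adv", "Prep", "werken"), "V"),
    (("Art", "N", "Punc", "huis"), "N"),
]
weights = [1.0, 2.0, 1.5, 3.0]  # stand-in for GainRatio feature weights
tag = mb_classify(case_base, weights, ("Art", "Adj", "N/V", "werken"))
```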
| { |
| "text": "As the third member of the comparison, we used a Brill-style transformation-based tagger (TBL) (Brill, 1995) for Dutch (Drenth, 1997 and tags are modeled by starting out with an imperfect tagging which is gradually transformed into one with fewer errors. This is achieved by selecting and sequencing transformation rules using the learning algorithm.", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 108, |
| "text": "(Brill, 1995)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 119, |
| "end": 132, |
| "text": "(Drenth, 1997", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transformation-Based PoS Tagger", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In an initial step, each word is assigned a tag independent of context. A known word is assigned its most likely tag determined by a maximum likelihood estimation from the training corpus. An unknown word, on the other hand, is assigned a tag based on lexical rules learned during training. All unknown words are initially tagged N. The application of lexical rules adapts the tag (where necessary) based on the local properties of the unknown word, such as its suffix.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transformation-Based PoS Tagger", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "After each word has received an initial tag, contextual rules are applied changing the initial PoS tag (where necessary) based on the context of the word to be tagged. The best contextual transformation rules and their order of application are selected by the learning algorithm during training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transformation-Based PoS Tagger", |
| "sec_num": "2.3" |
| }, |
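The two-stage process above (initial most-likely tags, then ordered contextual transformations) can be sketched as follows. The single rule shown is invented for illustration, not one of the roughly 300 contextual rules the tagger actually learns.

```python
def apply_contextual_rules(tags, rules):
    """Apply an ordered list of learned contextual transformation rules.
    Each rule (from_tag, to_tag, offset, trigger) rewrites a tag when the
    tag at the given relative offset matches the trigger."""
    tags = list(tags)
    for from_tag, to_tag, offset, trigger in rules:
        for i in range(len(tags)):
            j = i + offset
            if tags[i] == from_tag and 0 <= j < len(tags) and tags[j] == trigger:
                tags[i] = to_tag
    return tags

# Initial tagging: each word gets its most frequent tag; 'werken' looks
# like a plural noun (-en suffix) and starts out as N.  A hypothetical
# contextual rule then changes N to V when the preceding tag is a pronoun.
initial = ["Pron", "N"]            # e.g. "wij werken"
rules = [("N", "V", -1, "Pron")]
corrected = apply_contextual_rules(initial, rules)
```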
| { |
| "text": "The present implementation of the TBL PoS tagger for Dutch uses around 250 lexical rules and 300 contextual rules.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transformation-Based PoS Tagger", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "As we have mentioned earlier, the stand-alone results for the PoS taggers were computed using 80% of the Eindhoven Corpus (containing a total of 760,000 words) for training and 10% for tuning. The accuracy shown in table 1 was computed on the remaining 10% of the corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stand-Alone Results for the PoS Taggers", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "We can clearly see that the MBT tagger is performing best, followed by the HMM tagger, the least accurate tagger being the TBL tagger. 5 If the hypothesis that more accurate input to complex systems will produce more accurate results is correct, then these stand-alone results raise the expectation that when applying all three taggers in our WSD system-with all other settings being equalaccuracy should be highest when the MBT tagger was used to tag the data. Performance is expected to decrease with the use of the HMM tagger and to be lowest for the TBL tagger.", |
| "cite_spans": [ |
| { |
| "start": 135, |
| "end": 136, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stand-Alone Results for the PoS Taggers", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "This expectation might be falsified by the (possible) corpus dependency of the three PoS taggers: the capacity to generalize from the training corpus to the corpus to be tagged might be bigger in one tagger than in another, which means that the results obtained in the complex system can diverge from the expectation raised by the stand-alone results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stand-Alone Results for the PoS Taggers", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Let us now turn to the application in which we will use the three PoS taggers presented and evaluated above.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stand-Alone Results for the PoS Taggers", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Semantic lexical ambiguity remains a major problem in natural language processing (NLP) for which to date no satisfactory solution has been found. Word sense disambiguation (WSD) refers to the resolution of lexical semantic ambiguity and its goal is to attribute the correct sense(s) to words in a certain context. Accurate disambiguation of word senses is important for e.g. machine translation, information retrieval or document extraction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Disambiguation for Dutch", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The WSD system used in these experiments is a supervised corpus-based algorithm combining statistical classification with different kinds of linguistic information. This system explores the intuition that (high quality) linguistic information is beneficial for WSD. PoS is definitely one of the more accessible sources of linguistic knowledge. The hypothesis behind comparing various PoS taggers in this application is that the quality of the PoS tags assigned to the data can significantly influence the accuracy obtained by our WSD system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Disambiguation for Dutch", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In contrast to the English WSD data, the Dutch Senseval-2 WSD data is ambiguous with regard to PoS. This means that accurate PoS information is even more important since the WSD system is supposed to do morpho-syntactic as well as semantic disambiguation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Disambiguation for Dutch", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We will now first explain the statistical classification algorithm used and then proceed to describe the WSD system, its settings as well as the corpus used to generate the comparative results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Disambiguation for Dutch", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The statistical classifier used in the experiments reported here is a maximum entropy classifier (Berger et al., 1996) . Maximum entropy is a general technique for estimating probability distributions from data. If nothing about the data is known, it involves selecting the most uniform distribution where all events have equal probability. In other words, it means selecting the distribution which maximises the entropy.", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 118, |
| "text": "(Berger et al., 1996)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximum Entropy Classification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "If data is available, labeled training data is seen as a number of features which are used to derive a set of constraints for the model. This set of constraints characterises the class-specific expectations for the distribution. So, while the distribution should maximise the entropy, the model should also satisfy the constraints imposed by the labeled training data. A maximum entropy model is thus the model with maximum entropy of all models that satisfy the set of constraints derived from the training data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximum Entropy Classification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The maximum entropy model is built using the following formula:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximum Entropy Classification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u00a2 \u00a2 \u00a1 \u00a3 # \u00a5 \u00a4 \u00a6 \u00a7 \u00a3 \u00a9 \u00a6 \u00a6 \u00a6 \u00a7 \u00a2 \u00a3 \u00a1 !", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximum Entropy Classification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where the property function The main advantage of maximum entropy modeling is that the property functions, including all the different types of (linguistic) information in the model, take into account any information which might be useful for disambiguation. Thus, dissimilar types of information can be combined into a single model for WSD and no independence assumptions (as in e.g. a Naive Bayes algorithm) are necessary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximum Entropy Classification", |
| "sec_num": "3.1" |
| }, |
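A conditional maximum entropy classifier built from this kind of formula can be sketched as below. The binary property functions are represented as (feature, class) pairs with weights; the features, weights, and sense labels here are invented toy values (in the real system they are estimated from the Senseval-2 training data).

```python
import math

def maxent_prob(active_features, label, weights, labels):
    """p(c | x) = exp(sum_i lambda_i f_i(x, c)) / Z(x), where f_i(x, c) = 1
    exactly when feature i is active in x together with class c."""
    def score(c):
        return sum(weights.get((f, c), 0.0) for f in active_features)
    z = sum(math.exp(score(c)) for c in labels)  # normalising constant Z(x)
    return math.exp(score(label)) / z

# Invented weights for the ambiguous wordform 'aarde' (earth vs. soil)
weights = {("pos=N", "earth"): 0.5, ("ctx=planeet", "earth"): 1.2,
           ("pos=N", "soil"): 0.4, ("ctx=grond", "soil"): 1.0}
senses = ["earth", "soil"]
p_earth = maxent_prob(["pos=N", "ctx=planeet"], "earth", weights, senses)
```

Because all information enters through weighted property functions, PoS tags, lemmas, and context words can be mixed freely in one model.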
| { |
| "text": "The corpus used in this evaluation is the Dutch Senseval-2 corpus 6 (see (Hendrickx and van den Bosch, 2001 ) for a detailed description). In the experiments reported here, we only made use of the training section of the Dutch Senseval-2 dataset, containing approximately 120,000 tokens and 9,300 sentences.", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 107, |
| "text": "(Hendrickx and van den Bosch, 2001", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus and System Settings", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In a first step, the corpus is lemmatized and PoS tagged. Then, for each ambiguous wordform/lemma 7 all instances of its occurrence are extracted from the corpus. These instances are then transformed into different feature vectors. So a feature vector of the ambiguous wordform 'aarde' (earth/soil) corresponding to the model which comprises all possible information (incl. PoS) and uses context words would look like this:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus and System Settings", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "aarde N gat in de , zodat het aarde grond where the first slot represents the lemma, the second the PoS, the third to eighth slot are the context words (left before right) and the last slot represents the sense or class. 8 Only context words within the same sentence as the ambiguous wordform/lemma were taken into account. If for instance there was no left context, it was filled with \"empty\" features. Varying the information included, different feature sets are constructed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus and System Settings", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "For the basic classifier based on ambiguous wordforms, the feature set contains the corresponding lemma as well as a context of three words to the left and to the right of the ambiguous word. For the basic classifier based on ambiguous lemmas, the corresponding wordform and the context are included. The context can either be composed of wordforms or lemmas. For the classifiers including PoS tags, we in addition include the PoS tags of the ambiguous wordform/lemma from the various PoS taggers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus and System Settings", |
| "sec_num": "3.2" |
| }, |
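The construction of such feature vectors can be sketched as follows; the padding symbol and function name are illustrative choices, not taken from the system's actual implementation.

```python
def make_instance(tokens, lemmas, pos_tags, i, sense, window=3):
    """Build a feature vector for the ambiguous token at position i:
    lemma, PoS tag, three context words to the left and right (padded
    with an 'empty' filler at sentence boundaries), and the sense label."""
    pad = ["="] * window            # hypothetical filler for missing context
    padded = pad + tokens + pad
    left = padded[i:i + window]
    right = padded[i + window + 1:i + 2 * window + 1]
    return [lemmas[i], pos_tags[i]] + left + right + [sense]

# 'aarde' at position 3 in "gat in de aarde , zodat het ..."
tokens = ["gat", "in", "de", "aarde", ",", "zodat", "het"]
lemmas = ["gat", "in", "de", "aarde", ",", "zodat", "het"]
pos = ["N", "Prep", "Art", "N", "Punc", "Conj", "Pron"]
vec = make_instance(tokens, lemmas, pos, 3, "grond")
```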
| { |
| "text": "On the basis of the different feature sets, separate classifiers are built for every ambiguous wordform or lemma. This implies that the basis for group-ing occurrences of particular ambiguous words together is that either their wordform or their lemma is the same. In the experiments presented here, a frequency threshold of 10 was used, which means that classifiers were only built for the wordforms with an amount of training instances equal to or above the threshold. For the remaining wordforms, the baseline count was used, thus assigning the most frequent sense to every instance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus and System Settings", |
| "sec_num": "3.2" |
| }, |
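The threshold logic can be sketched as follows: a classifier is only built for words with at least 10 training instances, and the remaining words fall back to their most frequent sense. All word/sense pairs below are invented.

```python
from collections import Counter

def split_by_threshold(instances, threshold=10):
    """Decide per ambiguous word whether to build a classifier (enough
    training instances) or to fall back to the most frequent sense."""
    counts = Counter(word for word, _sense in instances)
    build = {w for w, c in counts.items() if c >= threshold}
    fallback_senses = {}
    for word, sense in instances:
        if word not in build:
            fallback_senses.setdefault(word, Counter())[sense] += 1
    baseline = {w: c.most_common(1)[0][0] for w, c in fallback_senses.items()}
    return build, baseline

# Invented training instances: 12x 'aarde' (enough), 5x 'boek' (too few)
instances = ([("aarde", "grond")] * 12
             + [("boek", "werk")] * 3 + [("boek", "object")] * 2)
build, baseline = split_by_threshold(instances)
```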
| { |
| "text": "In total, there were 1,364 ambiguous lemmas in the corpus of which 622 presented 10 or more occurrences, and 952 ambiguous wordforms of which 486 had 10 or more occurrences. So 622 lemma classifiers and 486 wordform classifiers were built.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus and System Settings", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The context was treated as a 'bag of words' which means that the position of a context word relative to the ambiguous wordform was not taken into account. This approach was chosen to help limit the data sparseness problem: if the context features are all treated dependent on their position relative to the ambiguous word in the sentence, the model will have more features to assign weights to. This means that the sparse data problem will be worse. If, on the other hand, context features are \"lumped\" together independent of their relative position, there are less features to be estimated and there is more data for the particular feature 'context'.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus and System Settings", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Before we turn to the actual results of using the different PoS taggers in our WSD system for Dutch, let us first compare the differences regarding the assigned PoS tags. Table 2 shows the distribution of the different PoS tags in the WSD data depending on the PoS tagger used, as well as the distribution of the PoS tags in the training corpus. A major difference between the distribution of PoS tags is that both the HMM and MBT tagger assign more V tags, whereas the TBL tagger assigns more N tags. The preference for N tags in the TBL tagger can be explained by the fact that all unknown words initially get tagged N. Also, in Dutch verbal infinitives have the same morphological suffix as plural nouns (-en). INT and Misc differ with all three taggers, but we could not detect any obvious reason for this. As we can see from are bigger differences between the TBL tagger and the other two, whereas the differences between the HMM and the MBT tagger are less noticeable.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 171, |
| "end": 178, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Evaluation of the WSD Application", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In order to test the real error of the classifiers built, we used a leave-one-out approach (Weiss and Kulikowski, 1991; Manning and Sch\u00fctze, 1999) . This means that every data item in turn is selected once as a test item and the classifier is trained on all remaining items. The accuracy of a single classifier is then the number of data items correctly predicted. The overall accuracy is the total of data items correctly predicted by all classifiers.", |
| "cite_spans": [ |
| { |
| "start": 91, |
| "end": 119, |
| "text": "(Weiss and Kulikowski, 1991;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 120, |
| "end": 146, |
| "text": "Manning and Sch\u00fctze, 1999)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Evaluation of the WSD Application", |
| "sec_num": "4" |
| }, |
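Leave-one-out evaluation as described above can be sketched generically; the stand-in classifier below simply predicts the majority label of its training items, purely to make the sketch runnable.

```python
from collections import Counter

def leave_one_out_accuracy(instances, train_and_predict):
    """Each item is held out once, a classifier is trained on the rest,
    and the overall accuracy is the fraction predicted correctly."""
    correct = 0
    for i, (features, label) in enumerate(instances):
        train = instances[:i] + instances[i + 1:]
        if train_and_predict(train, features) == label:
            correct += 1
    return correct / len(instances)

def majority_label(train, _features):
    # Stand-in classifier: always predict the most frequent training label.
    return Counter(label for _f, label in train).most_common(1)[0][0]

data = [("x1", "grond"), ("x2", "grond"), ("x3", "grond"), ("x4", "wereld")]
acc = leave_one_out_accuracy(data, majority_label)
```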
| { |
| "text": "The results in table 3 show the average accuracy on our training data using leave-one-out as a test method with respectively wordforms and lemmas as basis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Evaluation of the WSD Application", |
| "sec_num": "4" |
| }, |
| { |
| "text": "As the table of results shows, the WSD system performs well. The basic classifiers, which contain a minimum of information, already do significantly better than the frequency baseline, i.e. assigning the most frequent sense to every occurrence of an ambiguous wordform/lemma. Furthermore, adding PoS as extra linguistic information, next to the lemma/wordform and the context already included in the basic classifiers, increases the results over the accuracy achieved with a basic classifier. This supports the hypothesis underlying the WSD system that more linguistic information is beneficial for WSD. Since the WSD data needs to be disambiguated morpho-syntactically as well as with regard to lexical semantic ambiguity, it is not surprising that adding PoS information achieves better results than using only the lemma/wordform and context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Evaluation of the WSD Application", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Comparing performance across the different PoS taggers, we can see quite clearly that our expectations are (partly) confirmed: the MBT tagger, which did best in the stand-alone evaluation, also performs best in the WSD system. This is the case for all setups: using wordforms or lemmas as basis for the classifiers, as well as for classifiers including context as wordforms or as lemmas. 10 Surprisingly enough, the hypothesis does not hold for the \"ranking\" of the HMM and TBL taggers. Despite the fact that the HMM tagger performed second best in the stand-alone evaluation, it does not perform better than the TBL tagger when integrated into the WSD system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Evaluation of the WSD Application", |
| "sec_num": "4" |
| }, |
| { |
| "text": "A possible explanation might be that the difference between the training corpus and the WSD data is so large that the HMM tagger is no longer more accurate than the TBL tagger in the WSD application, leading to the conclusion that the HMM tagger is more corpus dependent than the TBL tagger. In particular, the heuristics for unknown words in the HMM tagger may produce worse results on the WSD data than the heuristics used by the TBL tagger. Since no gold-standard PoS-tagged version of the WSD data exists, it is difficult to investigate this puzzle any further. Nevertheless, our hypothesis that highly accurate input influences the results of a complex system is at least partly verified: the most accurate PoS tags also produce the most accurate results when integrated into our WSD system. (Footnote 10: Applying the paired sign test with a confidence level of 95%, all results using MBT PoS tags were found to be statistically significantly better than results with other PoS tags, and than the basic classifiers. The classifiers including TBL and HMM PoS tags do not differ significantly from each other, but both perform significantly better than the basic classifiers.) (Table 3: WSD results (in %) comparing the effect of integrating the output of the different PoS taggers into a complex system.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Evaluation of the WSD Application", |
| "sec_num": "4" |
| }, |
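The significance comparisons above rely on a paired sign test. A minimal sketch of such a test on per-item correctness vectors (illustrative only; the function name and the two-sided binomial computation are our assumptions, not the authors' implementation):

```python
from math import comb

def paired_sign_test(correct_a, correct_b):
    """Two-sided paired sign test on per-item correctness (1 = correct).

    Ties, i.e. items on which both systems agree, are discarded as usual.
    """
    wins_a = sum(1 for a, b in zip(correct_a, correct_b) if a > b)
    wins_b = sum(1 for a, b in zip(correct_a, correct_b) if b > a)
    n = wins_a + wins_b
    if n == 0:
        return 1.0  # no disagreements: no evidence of a difference
    k = min(wins_a, wins_b)
    # Probability of k or fewer wins for the weaker system under p = 0.5,
    # doubled for a two-sided test (capped at 1).
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# System A beats B on 9 disagreement items and loses on 1:
p = paired_sign_test([1] * 9 + [0], [0] * 9 + [1])
```

At the 95% confidence level this example difference is significant, since p = 22/1024, which is about 0.021 and below 0.05.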
| { |
| "text": "In this paper, we tested the hypothesis that high quality input improves the final results of a complex NLP system. To this end, we carried out an application-oriented evaluation of three PoS taggers in a WSD system. A transformation-based tagger, a Hidden Markov Model tagger, and a memory-based tagger were compared for this purpose.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "After the MBT tagger was established as the most accurate tagger in a stand-alone evaluation, the PoS information from all three taggers was integrated into our WSD system for Dutch. This supervised system uses maximum entropy classifiers, which allow various sources of information to be integrated into a single model. The results computed on the training part of the Dutch Senseval-2 corpus show that the MBT tagger also produces the best results in the WSD system. This clearly indicates that highly accurate input to a WSD system produces better results than input of lesser quality.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "A surprising result, however, was the fact that the performance of the complex WSD system with the different PoS tags included does not necessarily reflect the stand-alone accuracy of the PoS taggers. Even though the HMM tagger performed better than the TBL tagger in the stand-alone comparison, no significant difference between the two can be observed in the results of the WSD system. A possible explanation is corpus dependency.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For future work, we would like to include the PoS tags of the context wordforms or lemmas to see whether our hypothesis still holds. It would also be interesting to see whether this additional information further improves the overall results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "See table 2 for a distribution of the main PoS tag categories in the WSD data and the Eindhoven corpus. 2 See http://www.let.rug.nl/~vannoord/alp.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Words in sentence initial position are decapitalized beforehand. 4 Freely available for research purposes at http://ilk.uvt.nl/software.html.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "All results differ significantly applying the paired sign test with a confidence level of 95%.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For more information on Senseval and for downloads of the data see http://www.senseval.org/. 7 A wordform/lemma is 'ambiguous' if it has two or more different senses in the training data. The sense '=' is seen as marking the basic sense of a word/lemma and is therefore also taken into account. 8 'Sense' or 'class' refers to the different labels which disambiguate the ambiguous wordforms/lemmas.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This research was carried out within the framework of the PIONIER Project Algorithms for Linguistic Processing. This PIONIER Project is funded by NWO (Dutch Organization for Scientific Research) and the University of Groningen. We are grateful to Robbert Prins for his help with the HMM tagger as well as to Gertjan van Noord and Menno van Zaanen for comments and discussions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A maximum entropy approach to natural language processing", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Berger", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [ |
| "Della" |
| ], |
| "last": "Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [ |
| "Della" |
| ], |
| "last": "Pietra", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Computational Linguistics", |
| "volume": "22", |
| "issue": "1", |
| "pages": "39--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Berger, Stephen Della Pietra, and Vincent Della Pietra. 1996. A maximum entropy approach to nat- ural language processing. Computational Linguistics, 22(1):39-71.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "WOTAN-een automatische grammaticale tagger voor het Nederlands", |
| "authors": [ |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Berghmans", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johan Berghmans. 1994. WOTAN-een automatische grammaticale tagger voor het Nederlands. Master's thesis, Nijmegen University, Nijmegen.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Alpino: Wide-coverage computational analysis of Dutch", |
| "authors": [ |
| { |
| "first": "Gosse", |
| "middle": [], |
| "last": "Bouma", |
| "suffix": "" |
| }, |
| { |
| "first": "Gertjan", |
| "middle": [], |
| "last": "van Noord", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Malouf", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Computational Linguistics in the Netherlands", |
| "volume": "", |
| "issue": "", |
| "pages": "45--59", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gosse Bouma, Gertjan van Noord, and Robert Malouf. 2001. Alpino: Wide-coverage computational analy- sis of Dutch. In Walter Daelemans, Khalil Sima'an, Jorn Veenstra, and Jakub Zavrel, editors, Computa- tional Linguistics in the Netherlands 2000, pages 45- 59, Amsterdam. Rodopi.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Transformation-based error-driven learning and natural language processing: A case study in part of speech tagging", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Brill", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Computational Linguistics", |
| "volume": "21", |
| "issue": "4", |
| "pages": "543--565", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part of speech tagging. Computational Lin- guistics, 21(4):543-565.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Head-Driven Statistical Models for Natural Language Parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 1999. Head-Driven Statistical Mod- els for Natural Language Parsing. Ph.D. thesis, Com- puter and Information Science Department, University of Pennsylvania, Philadelphia.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Finite state tools for natural language processing", |
| "authors": [ |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Daciuk", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the COLING 2000 Workshop \"Using Toolsets and Architectures to Build NLP Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "34--37", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jan Daciuk. 2000. Finite state tools for natural lan- guage processing. In Proceedings of the COLING 2000 Workshop \"Using Toolsets and Architectures to Build NLP Systems\", pages 34-37, Centre Universi- taire, Luxembourg.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "MBT: Memory-Based tagger, reference guide", |
| "authors": [ |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakub", |
| "middle": [], |
| "last": "Zavrel", |
| "suffix": "" |
| }, |
| { |
| "first": "Ko", |
| "middle": [], |
| "last": "van der Sloot", |
| "suffix": "" |
| }, |
| { |
| "first": "Antal", |
| "middle": [], |
| "last": "van den Bosch", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Walter Daelemans, Jakub Zavrel, Ko van der Sloot, and Antal van den Bosch. 2002a. MBT: Memory-Based tagger, reference guide. Technical Report ILK 02-09, Induction of Linguistic Knowledge, Computational Linguistics, Tilburg University, Tilburg. version 1.0.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "TiMBL: Tilburg Memory-Based learner, reference guide", |
| "authors": [ |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakub", |
| "middle": [], |
| "last": "Zavrel", |
| "suffix": "" |
| }, |
| { |
| "first": "Ko", |
| "middle": [], |
| "last": "van der Sloot", |
| "suffix": "" |
| }, |
| { |
| "first": "Antal", |
| "middle": [], |
| "last": "van den Bosch", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Walter Daelemans, Jakub Zavrel, Ko van der Sloot, and Antal van den Bosch. 2002b. TiMBL: Tilburg Memory-Based learner, reference guide. Technical Report ILK 02-10, Induction of Linguistic Knowl- edge, Computational Linguistics, Tilburg University, Tilburg. version 4.3.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Using a hybrid approach towards Dutch part-of-speech tagging", |
| "authors": [ |
| { |
| "first": "Erwin", |
| "middle": [ |
| "W" |
| ], |
| "last": "Drenth", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Erwin W. Drenth. 1997. Using a hybrid approach to- wards Dutch part-of-speech tagging. Master's the- sis, Humanities Computing, University of Groningen, Groningen.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Dutch word sense disambiguation: Data and preliminary results", |
| "authors": [ |
| { |
| "first": "Iris", |
| "middle": [], |
| "last": "Hendrickx", |
| "suffix": "" |
| }, |
| { |
| "first": "Antal", |
| "middle": [], |
| "last": "van den Bosch", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of Senseval-2, Second International Workshop on Evaluating Word Sense Disambiguation Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "13--16", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iris Hendrickx and Antal van den Bosch. 2001. Dutch word sense disambiguation: Data and preliminary re- sults. In Proceedings of Senseval-2, Second Interna- tional Workshop on Evaluating Word Sense Disam- biguation Systems, pages 13-16, Toulouse.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Evaluating the results of a Memory-Based word-expert approach to unrestricted word sense disambiguation", |
| "authors": [ |
| { |
| "first": "V\u00e9ronique", |
| "middle": [], |
| "last": "Hoste", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| }, |
| { |
| "first": "Iris", |
| "middle": [], |
| "last": "Hendrickx", |
| "suffix": "" |
| }, |
| { |
| "first": "Antal", |
| "middle": [], |
| "last": "van den Bosch", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the ACL-02 Workshop on Word Sense Disambiguation: Recent Successes and Future Directions", |
| "volume": "", |
| "issue": "", |
| "pages": "95--101", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "V\u00e9ronique Hoste, Walter Daelemans, Iris Hendrickx, and Antal van den Bosch. 2002. Evaluating the results of a Memory-Based word-expert approach to unre- stricted word sense disambiguation. In Proceedings of the ACL-02 Workshop on Word Sense Disambigua- tion: Recent Successes and Future Directions, pages 95-101, Philadelphia.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Foundations of Statistical Natural Language Processing", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christopher Manning and Hinrich Sch\u00fctze. 1999. Foun- dations of Statistical Natural Language Processing. MIT Press, Cambridge.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Reinforcing parser preferences through tagging", |
| "authors": [ |
| { |
| "first": "Robbert", |
| "middle": [], |
| "last": "Prins", |
| "suffix": "" |
| }, |
| { |
| "first": "Gertjan", |
| "middle": [], |
| "last": "van Noord", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robbert Prins and Gertjan van Noord. 2003. Reinfor- cing parser preferences through tagging. Traitement automatique des langues. forthcoming.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Woordfrequenties in Geschreven en Gesproken Nederlands", |
| "authors": [ |
| { |
| "first": "Pieter", |
| "middle": [], |
| "last": "uit den Boogaart", |
| "suffix": "" |
| } |
| ], |
| "year": 1975, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pieter uit den Boogaart. 1975. Woordfrequenties in Geschreven and Gesproken Nederlands. Oosthoek, Scheltema en Holkema, Utrecht.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The Alpino dependency treebank", |
| "authors": [ |
| { |
| "first": "Leonoor", |
| "middle": [], |
| "last": "van der Beek", |
| "suffix": "" |
| }, |
| { |
| "first": "Gosse", |
| "middle": [], |
| "last": "Bouma", |
| "suffix": "" |
| }, |
| { |
| "first": "Rob", |
| "middle": [], |
| "last": "Malouf", |
| "suffix": "" |
| }, |
| { |
| "first": "Gertjan", |
| "middle": [], |
| "last": "van Noord", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Computational Linguistics in the Netherlands", |
| "volume": "", |
| "issue": "", |
| "pages": "8--22", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leonoor van der Beek, Gosse Bouma, Rob Malouf, and Gertjan van Noord. 2002. The Alpino dependency treebank. In Mari\u00ebt Theune, Anton Nijholt, and Hen- dri Hondorp, editors, Computational Linguistics in the Netherlands 2001, pages 8-22, Amsterdam. Rodopi.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Memory-Based word sense disambiguation", |
| "authors": [ |
| { |
| "first": "Jorn", |
| "middle": [], |
| "last": "Veenstra", |
| "suffix": "" |
| }, |
| { |
| "first": "Antal", |
| "middle": [], |
| "last": "van den Bosch", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Buchholz", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakub", |
| "middle": [], |
| "last": "Zavrel", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Computers and the humanities", |
| "volume": "34", |
| "issue": "1-2", |
| "pages": "171--177", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jorn Veenstra, Antal van den Bosch, Sabine Buchholz, Walter Daelemans, and Jakub Zavrel. 2000. Memory- Based word sense disambiguation. Computers and the humanities, 34(1-2):171-177.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Computer Systems that Learn", |
| "authors": [ |
| { |
| "first": "Sholom", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "Casimir", |
| "middle": [], |
| "last": "Kulikowski", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sholom Weiss and Casimir Kulikowski. 1991. Computer Systems that Learn. Morgan Kaufman, San Mateo.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Recent advances in Memory-Based part-of-speech tagging", |
| "authors": [ |
| { |
| "first": "Jakub", |
| "middle": [], |
| "last": "Zavrel", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "VI Simposio Internacional de Communicaci\u00f3n Social", |
| "volume": "", |
| "issue": "", |
| "pages": "590--597", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jakub Zavrel and Walter Daelemans. 1999. Recent ad- vances in Memory-Based part-of-speech tagging. In VI Simposio Internacional de Communicaci\u00f3n Social, pages 590-597, Santiago de Cuba.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "uris": null, |
| "text": "maximise the likelihood of the training data and, at the same time, maximise the entropy of the model. This means that during training the weight \u03bb_i for each feature f_i present in the training data is computed and stored. During testing, the sum of the weights \u03bb_i of all features f_i found in the test instances is computed for each class c, and the class with the highest score is chosen.", |
| "type_str": "figure", |
| "num": null |
| }, |
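The test-time procedure described in the figure text, summing the weights of the active features per class and choosing the highest-scoring class, can be sketched as follows. The feature names, weights, and the Dutch example word are invented for illustration; in the actual system the weights come from maximum entropy training.

```python
# Maximum entropy classification at test time: for each class, sum the
# weights of the features active in the test instance and pick the class
# with the highest score.
def classify(active_features, weights, classes):
    def score(c):
        return sum(weights.get((f, c), 0.0) for f in active_features)
    return max(classes, key=score)

# Hypothetical trained weights for the ambiguous Dutch word "band":
weights = {
    ("lemma=band", "music_group"): 1.2,
    ("pos=N", "music_group"): 0.4,
    ("lemma=band", "tyre"): 0.7,
}
best = classify({"lemma=band", "pos=N"}, weights, ["music_group", "tyre"])
```

Here "music_group" scores 1.2 + 0.4 = 1.6 against 0.7 for "tyre", so it is chosen.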
| "TABREF1": { |
| "content": "<table/>", |
| "text": "", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF2": { |
| "content": "<table/>", |
| "text": "Frequencies of PoS tags assigned by each PoS tagger in the WSD data and distribution of PoS in the training corpus", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| } |
| } |
| } |
| } |