| { |
| "paper_id": "L16-1015", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:03:37.016301Z" |
| }, |
| "title": "If You Even Don't Have a Bit of Bible: Learning Delexicalized POS Taggers", |
| "authors": [ |
| { |
| "first": "Zhiwei", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Charles University", |
| "location": { |
| "settlement": "Prague" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mare\u010dek", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Charles University", |
| "location": { |
| "settlement": "Prague" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Zden\u011bk\u017eabokrtsk\u00fd", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Charles University", |
| "location": { |
| "settlement": "Prague" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Charles University", |
| "location": { |
| "settlement": "Prague" |
| } |
| }, |
| "email": "zeman@ufal.mff.cuni.cz" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Part-of-speech (POS) induction is one of the most popular tasks in research on unsupervised NLP. Various unsupervised and semisupervised methods have been proposed to tag an unseen language. However, many of them require some partial understanding of the target language because they rely on dictionaries or parallel corpora such as the Bible. In this paper, we propose a different method named delexicalized tagging, for which we only need a raw corpus of the target language. We transfer tagging models trained on annotated corpora of one or more resource-rich languages. We employ language-independent features such as word length, frequency, neighborhood entropy, character classes (alphabetic vs. numeric vs. punctuation) etc. We demonstrate that such features can, to certain extent, serve as predictors of the part of speech, represented by the universal POS tag (Das and Petrov, 2011).", |
| "pdf_parse": { |
| "paper_id": "L16-1015", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Part-of-speech (POS) induction is one of the most popular tasks in research on unsupervised NLP. Various unsupervised and semisupervised methods have been proposed to tag an unseen language. However, many of them require some partial understanding of the target language because they rely on dictionaries or parallel corpora such as the Bible. In this paper, we propose a different method named delexicalized tagging, for which we only need a raw corpus of the target language. We transfer tagging models trained on annotated corpora of one or more resource-rich languages. We employ language-independent features such as word length, frequency, neighborhood entropy, character classes (alphabetic vs. numeric vs. punctuation) etc. We demonstrate that such features can, to certain extent, serve as predictors of the part of speech, represented by the universal POS tag (Das and Petrov, 2011).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Part-of-speech (POS) tagging is sometimes considered an almost solved problem in NLP. Standard supervised approaches often reach accuracy above 95% if sufficiently large hand-labeled training data are available (typically several hundred thousand tokens or more). However, we still believe that it makes sense to study semi-supervised and unsupervised approaches because of the following reasons:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "\u2022 It is hardly realistic to expect that manual annotation efforts will be ever invested into all 7,000 languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "\u2022 Even if it might be very efficient to start at least with some small annotated data, we believe that adding new features independent of hand-tagged text might be helpful in a combination of supervised and unsupervised methods, e.g., for better handling of out-ofvocabulary words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "\u2022 We should keep in mind that the \"standard\" POS distinctions-although broadly used-are not manifested in languages directly. They result from certain linguistic tradition whose current dominance can be attributed to geopolitical reasons rather than its linguistic \"obviousness\". Thus, for instance, if we say that something is an adverb in language X, we should be able to support such a claim by some measurable evidence rather than just by saying that it becomes an adverb if translated to English.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "\u2022 For some multilingual NLP tasks, such as unsupervised dependency parsing (or parser transfer), it might be more important to preprocess all languages under study as similarly as possible (including POS tagging), rather than to maximize accuracy with respect to highly different gold-standard data in individual languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "We propose \"delexicalized tagging\", a new method for under-resourced languages. In analogy to delexicalized parsing (Zeman and Resnik, 2008) , we transfer a tagging model from a resource-rich language (or a set of languages); the model is independent of individual word forms. In delexicalized parsing, word form sequences are substituted by sequences of POS tags, which-of courseis not extendable to tagging. Instead, we substitute word forms by vectors of numerical features that can be computed using only unannotated monolingual texts. The background intuition is that the individual POS categories will tend to manifest similar statistical properties across languages (e.g., prepositions tend to be short, relatively frequent, showing different patterns of conditional entropy to the left versus to the right, as well as certain asymmetry of occurrences along sentence length). Thus, unlike most POS tagging methods for resource-poor languages, we do not transfer the tagging knowledge using dictionaries or parallel data, but exclusively via the R n space. 1 In addition, we present a new publicly available resource containing POS-labeled texts for 107 languages, automatically tagged by the presented approach.", |
| "cite_spans": [ |
| { |
| "start": 116, |
| "end": 140, |
| "text": "(Zeman and Resnik, 2008)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1063, |
| "end": 1064, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "There is a body of literature about POS tagging of under-resourced languages. Most approaches rely on the existence of some form of parallel (or comparable) data. We will discuss only those approaches that attempt at using the same tagset across languages, and not those aiming at unsupervised induction, such as the well-known Brown clusters induced in a fully unsupervised fashion (Brown et al., 1992) . An overview of such truly unsupervised approaches can be found in (Christodouloupoulos et al., 2010 ). 2 (Yarowsky and Ngai, 2001 ) project POS tags from English to French and Chinese via both automatic and gold alignment, and report substantial growth of accuracy after using de-noising postprocessing. (Fossum and Abney, 2005) extend this approach by projecting multiple source languages onto a target language. (Das and Petrov, 2011) use graph-based label propagation for cross-lingual knowledge transfer, and estimate emission distributions in the target language using a loglinear model. (Duong et al., 2013) choose only automatically recognized \"good\" sentences from the parallel data, and further apply self-training. (Agi\u0107 et al., 2015) learn taggers for 100 languages using aligned Bible verses from The Bible Corpus (Christodouloupoulos et al., 2010) .", |
| "cite_spans": [ |
| { |
| "start": 383, |
| "end": 403, |
| "text": "(Brown et al., 1992)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 472, |
| "end": 505, |
| "text": "(Christodouloupoulos et al., 2010", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 511, |
| "end": 535, |
| "text": "(Yarowsky and Ngai, 2001", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 710, |
| "end": 734, |
| "text": "(Fossum and Abney, 2005)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 820, |
| "end": 842, |
| "text": "(Das and Petrov, 2011)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 999, |
| "end": 1019, |
| "text": "(Duong et al., 2013)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 1131, |
| "end": 1150, |
| "text": "(Agi\u0107 et al., 2015)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1232, |
| "end": 1266, |
| "text": "(Christodouloupoulos et al., 2010)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Besides approaches based on parallel data, there are also experiments showing that reasonable POS tagging accuracy (close to 90 %) can be reached using quick and efficient prototyping techniques, such as (Cucerzan and Yarowsky, 2002) . However, such approaches rely on at least partial understanding of the target language grammar, and on the availability of a dictionary, hence they do not scale well when it comes to tens or hundreds of languages (Cucerzan and Yarowsky experiment with two languages only).", |
| "cite_spans": [ |
| { |
| "start": 204, |
| "end": 233, |
| "text": "(Cucerzan and Yarowsky, 2002)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "We propose a statistical method to predict the POS tags in a previously unseen language. The method is quite different from those described above. Our system needs just a raw corpus of the target language-something that can be easily obtained for a large number of world's languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Delexicalized Tagging", |
| "sec_num": "3." |
| }, |
| { |
| "text": "We proceed as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Overview", |
| "sec_num": "3.1." |
| }, |
| { |
| "text": "1. we identify the sets of source languages (those for which we have POS labeled data) and target languages (those for which we have sufficiently big monolingual data and which we want to label by our method), 2. for each word type in the source and target languages, we extract a feature vector that describes its statistical properties in the corresponding monolingual corpus, 3. for all source languages, each word feature vector and its POS tag are used as a training instance for a classifier, and the resulting classifier is used to assign POS tags to all words' feature vectors in the target languages, 4. we evaluate our approach on the target languages for which there are labeled data available, and assume that reasonably similar accuracies are reached also for the other target languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Overview", |
| "sec_num": "3.1." |
| }, |
| { |
| "text": "A prerequisite to our approach is a common tagset for both the source and the target languages. We use the same tagset as (Das and Petrov, 2011) , the Google Universal POS tag set (Petrov et al., 2012) . With just 12 tags it is fairly coarse-grained, which is advantageous for a resource-poor method such as ours; nevertheless it has proved useful in downstream applications such as parsing. The 12 tags are NOUN, VERB, ADJ (adjective), ADV (adverb), PRON (pronoun), DET (determiner), NUM (numeral), ADP (adposition), CONJ (conjunction), PRT (particle), PUNC (punctuation) and X (unknown).", |
| "cite_spans": [ |
| { |
| "start": 122, |
| "end": 144, |
| "text": "(Das and Petrov, 2011)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 180, |
| "end": 201, |
| "text": "(Petrov et al., 2012)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tagset", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "This tagset was recently extended in the Universal Dependencies project 3 (Nivre et al., 2016) : five categories were split to finer subclasses. Using this larger tagset in our experiments is likely to reduce reliability of the results.", |
| "cite_spans": [ |
| { |
| "start": 74, |
| "end": 94, |
| "text": "(Nivre et al., 2016)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tagset", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "The list below describes the features that we use for the POS prediction. Let us define our notation first. Let C be a corpus and c i the i-th token in the corpus. N = |C| = the number of tokens in the corpus C. f (w) = |{i : c i = w}| = the absolute word frequency, i.e. number of instances of the word type w in the corpus C. Similarly, f (x, y) is the absolute frequency of the word bigram xy. P re(w) = {x : \u2203i (c i = w) \u2227 (c i\u22121 = x)} is the set of word types that occur at least once in a position preceding an instance of w. Analogously, N ext(w) denotes the set of word types following w in the corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "Context(w) = {x, y : \u2203i (c i\u22121 = x) \u2227 (c i = w) \u2227 (c i+1 = y)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "} denotes the set of contexts surrounding w, and Subst(w) = {y : Context(y) \u2229 Context(w) = \u2205} is the set of words that share a context with w.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "1. word length -the number of characters in w 2. log frequency -logarithm of the relative frequency of", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "w in C log f (w) N 3. preceding word entropy P N = y\u2208P re(w) f (y) y\u2208P re(w) \u2212 f (y) P N log f (y) P N 4. following word entropy N N = y\u2208N ext(w) f (y) y\u2208N ext(w) \u2212 f (y) N N log f (y) N N 5. substituting word entropy SN = y\u2208Subst(w) f (y) y\u2208Subst(w) \u2212 f (y) SN log f (y) SN 6", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": ". is number -binary value is number(w), 7. is punctuation -binary value is punctuation(w),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "8. relative frequency after number", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "log |i : c i = w \u2227 is number(c i\u22121 )| f (w)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "9. relative frequency after punctuation", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "log |i : c i = w \u2227 is punctuation(c i\u22121 ) f (w)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "10. weighted sum of pointwise mutual information (PMI) of w with the preceding word -collect all words y in C that precede w, then calculate their PMI values with w and make summation of PMIs weighted by the joint probability of the pair", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "y\u2208P re(w) f (w, y) \u00d7 log N \u00d7f (w,y) f (w)\u00d7f (y) N", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "11. weighted sum of PMI of w with the following wordfully analogous to the previous feature,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "12. entropy of suffixes following the root of w -First we collect counts of suffixes count(suf f ix) in C whose length range from 1 to 4 and counts of respective roots (words without suffixes) count(root) in C. For each word, we find the border between root and suffix by maximization of the product f (root) \u00d7 f (suf f ix).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "Then, we compute conditional entropy over all suffixes given the root. 4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "13. how many different words appear before w: |P re(w)|", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "14. how many different words appear after w:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "|N ext(w)|", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "15. how many different words in C share the same context as w:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "|Subst(w)|", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "16. pointwise mutual information between w and the most frequent preceding word", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "M axP = arg max y\u2208P re(w) f (y) log N \u00d7 f (w, M axP ) f (w) \u00d7 f (M axP )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "17. pointwise mutual information between w and the most frequent following word -fully analogous to the previous feature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "\u2022 POS-tagged data for source languages; this data is used for training POS classifiers; we use HamleDT 2.0 (Zeman et al., 2014) , a collection of treebanks for 30 languages.", |
| "cite_spans": [ |
| { |
| "start": 107, |
| "end": 127, |
| "text": "(Zeman et al., 2014)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "We took the first 50,000 tokens from the HamleDT 2.0 training sections of 13 languages (ISO 639-1 codes): bg, ca, cs, de, el, en, hi, hu, it, pt, ru, sv, tr. Each token was considered one training instance (i.e., n occurrences of a word w results in n identical instances). Their word feature vectors were computed using at most the first 20 million tokens from the WEB part of the W2C corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training POS Classifiers", |
| "sec_num": "3.5." |
| }, |
| { |
| "text": "We experimented with several types of classifiers:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training POS Classifiers", |
| "sec_num": "3.5." |
| }, |
| { |
| "text": "\u2022 Baseline We assign PUNC to all tokens consisting of non-alphanumerical characters, NUM to all tokens containing a digit, and NOUN to the remaining tokens.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training POS Classifiers", |
| "sec_num": "3.5." |
| }, |
| { |
| "text": "\u2022 K-nearest-neighbors (KNN) (Cover and Hart, 1967) , with k = 100.", |
| "cite_spans": [ |
| { |
| "start": 28, |
| "end": 50, |
| "text": "(Cover and Hart, 1967)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training POS Classifiers", |
| "sec_num": "3.5." |
| }, |
| { |
| "text": "\u2022 Support vector machines (SVM) with radial kernel (Boser et al., 1992) .", |
| "cite_spans": [ |
| { |
| "start": 51, |
| "end": 71, |
| "text": "(Boser et al., 1992)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training POS Classifiers", |
| "sec_num": "3.5." |
| }, |
| { |
| "text": "\u2022 Bagging (Breiman, 1996) applied both on KNN and SVM. We randomly sampled the training instances with replacement and randomly extracted half of the whole feature space with replacement.", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 25, |
| "text": "(Breiman, 1996)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training POS Classifiers", |
| "sec_num": "3.5." |
| }, |
| { |
| "text": "\u2022 Random Forest (Ho, 1995) .", |
| "cite_spans": [ |
| { |
| "start": 16, |
| "end": 26, |
| "text": "(Ho, 1995)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training POS Classifiers", |
| "sec_num": "3.5." |
| }, |
| { |
| "text": "\u2022 Gradient Tree Boosting (Friedman, 2002) .", |
| "cite_spans": [ |
| { |
| "start": 25, |
| "end": 41, |
| "text": "(Friedman, 2002)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training POS Classifiers", |
| "sec_num": "3.5." |
| }, |
| { |
| "text": "We trained classifiers for each source language separately and then for concatenated data of the following 7 languages: bg, ca, de, el, hi, hu, tr (in our results, we refer to these combined data as \"c7\").", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training POS Classifiers", |
| "sec_num": "3.5." |
| }, |
| { |
| "text": "The first 1000 tokens from the HamleDT 2.0 test sections of the following languages were used for evaluation: bg, bn, ca, cs, da, de, el, es, en, et, eu, fa, fi, hi, hu, it, la, nl, pt, ro, ru, sk, sl, sv, te, tr. Again, feature vectors for individual words are based on the the WEB component of W2C. Naturally there will be words in the test data that have not been observed in W2C. Since we cannot compute the features of these out-of-vocabulary words, we predict their tag as NUM if they contain a digit, and as NOUN otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3.6." |
| }, |
| { |
| "text": "Each target language was evaluated separately for each source language, and then for the above mentioned mixture of 7 languages (c7); The results using the Bagging SVM classifier are summarized in Highlighted results indicate that the same language was used for training and testing. Bold indicates the best result where the target language was not used in training. The avg column shows the average accuracy for given language (not counting the highlighted results). The c7 training data stands for the concatenation of 7 source languages: bg, ca, de, el, hi, hu, and tr. Table 2 compares the scores of different classifiers. All the classifiers were trained on c7; the languages included in c7 were excluded from the testing set. The standard SVM classifier performs better than KNN, the average tagging accuracy on c7 is 4.7% higher and it is better on 15 out of 19 languages. Bagging improves the average accuracy of KNN by 3%. The SVM's average accuracy slightly decreases when bagging is used, however, 9 out of 19 languages are tagged better. We observed improvement for both classifiers also over models trained on individual languages. The Gradient tree boosting classifier is by 1.4% worse than SVM. (Ho, 1998) suggests to expand feature vectors by using certain functions of the original features (e.g., pairwise summation, pairwise differences, pairwise products and boolean combination for binary and categorical features). For the Random forest classifier, we used the same techniques as bagging (sampled both instances and features with replacement) and we also expanded the feature space from 17 to 20 features using feature combination methods. 5 Even though the combined features do not contribute new information, being able to weigh their concurrent appearance actually increases accuracy. Table 3 shows the confusion matrix of tag prediction. It is no surprise that punctuation (PUNC) is the easiest category to predict. 
At the other end of the scale, the X category will intuitively contain words of mixed nature, which is impossible to predict.", |
| "cite_spans": [ |
| { |
| "start": 1210, |
| "end": 1220, |
| "text": "(Ho, 1998)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1662, |
| "end": 1663, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 573, |
| "end": 580, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 1810, |
| "end": 1817, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3.6." |
| }, |
| { |
| "text": "Certain idiosyncrasies of tokenization schemes negatively affect the results. The underscore (\" \") token is ex- tensively used in Catalan, Spanish (dropped pronominal subjects) and in Turkish (representing stages of morphological derivation). Hindi has empty NULL nodes (often but not always representing elided verbs). Several languages contain multi-word expressions collapsed into one token (e.g. [es] Tribunal Supremo de Justicia); since these are not naturally occurring strings, they are out of our vocabulary (they have no footprint in the W2C corpus).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "4." |
| }, |
| { |
| "text": "V E R B P U N C A D J A D P P R O N C O N J A D V N U M X P R T D E T Gold Predicted", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "4." |
| }, |
| { |
| "text": "We could remove the \"NULL\" and \" \" nodes from both training and testing data to get results that are closer to realworld application. Note however that we cannot automatically split the multi-word expressions because we do not have gold tags for the individual words. The model is quite successful in predicting prepositions (ADP), conjunctions (CONJ), nouns (NOUN; this is the most frequent part of speech in most languages, hence our recall is significantly higher than precision) and numerals (NUM; numbers expressed by digits, which are as easy as punctuation, help to boost this category).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "4." |
| }, |
| { |
| "text": "On the other hand, the model is unsuccessful in predicting adjectives, adverbs, pronouns and particles. For particles (PRT) the explanation could be that they are poorly defined, or their definition significantly differs across languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "4." |
| }, |
| { |
| "text": "A better definition or even partition of pronouns may also help: personal pronouns do not occur in the same contexts as possessive or relative pronouns, and languages also disagree in the pronoun-determiner distinction. Furthermore, pro-drop languages use personal pronouns much more sparingly than e.g. English or German. Similarly, many languages lack articles (tagged DET). As is apparent from the c7 output, articles such as English a and the often end up tagged as adjectives. That seems a good back-off decision because articles modify noun phrases similarly to adjectives. Obviously, we can improve the results if we know something about the target language and if one or more related languages are available in our source data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Example 1: target Portuguese. When trained on c7, the tagging accuracy is 58%; when trained on Italian, the accuracy jumps to 71%, in spite of the training data being 7 times smaller. One of the c7 languages is Catalan, supposedly close to Portuguese, but the other languages introduce too much noise. Detailed analysis reveals that the Italian model dramatically improves recall of adjectives and prepositions, and precision of numerals. Verbs rise in recall and drop in precision but the F score is still better than with c7. On the other hand, the recall of pronouns is seriously damaged as only 11/72 are correctly identified (while it was 30/72 with c7).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Example 2: target Slovak. When trained on c7, the tagging accuracy is 59%; when trained on Czech, the accuracy jumps to 75%. The only Slavic language in c7 is Bulgarian, and it is an outlier among Slavic languages because it has lost the case system of nouns. Detailed analysis reveals that it is extremely difficult (for both models) to distinguish Slovak adverbs from nouns. On the other hand, prepositions are moderately difficult with the c7 model (P=48%, R=75%) but they are practically solved with the Czech model (P=R=99%). The c7 model mistook many pronouns and other short closed-class words for prepositions. Pronouns, that are in general quite difficult to predict, have poor results with the c7 model (F=20%) but they come quite well with the Czech model (F=79%).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Example 3: target Basque. Basque is an isolated language, without known genetic relationship to any other language. It is an agglutinating language with a comparatively rich case system, so one might be tempted to choose Hungarian as the source language. But the accuracy (see Table 1) would be less than 47%. The best single-source result is yielded by German (57%), which superficially resembles agglutinating languages with its long compound words. Nevertheless, the mixed model proves to be the best source for isolated languages like Basque: the best accuracy, 62%, is achieved with the c7 model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "4." |
| }, |
| { |
| "text": "The configuration that performs best, which is the SVM classifier trained on the mixture of 7 source languages, was used to tag texts in 107 languages selected from the W2C corpus, 1 million tokens per language. This new resource is called Deltacorpus (a corpus tagged by a DELexicalized TAgger) and it is available on-line 6 under the CC BY-SA license. Table 4 gives a summary of the languages. We have excluded languages whose WEB corpus in W2C is too noisy (especially due to wrong language identification), as well as a few Asian languages with non-trivial word segmentation (e.g., Chinese, Japanese and Thai).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 354, |
| "end": 361, |
| "text": "Table 4", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Deltacorpus", |
| "sec_num": "5." |
| }, |
| { |
| "text": "This paper presents a new method for cross-language transfer of POS-tagging models. To the best of our knowledge, this is the first attempt at transferring POS taggers without any bilingual (parallel or comparable) data. We experimented with various language-independent features and several classifiers; the SVM with 17 features, trained on a mix of 7 languages, outperformed other models on our evaluation data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "6." |
| }, |
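The transfer setup summarized above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: it uses a tiny subset of the 17 features, toy data, and a 1-nearest-neighbour classifier (one of the classifier families the paper compares) in place of the SVM, so that the sketch stays self-contained.

```python
# Sketch of delexicalized POS transfer (illustrative, not the authors' code).
# Words are represented only by language-independent surface features, so a
# model trained on a tagged source language can be applied to a target
# language it has never seen. A 1-NN classifier stands in for the SVM.
import math
from collections import Counter

def features(word, rel_freq):
    """A small subset of the paper's language-independent features."""
    return [
        float(len(word)),                           # word length
        rel_freq,                                   # relative corpus frequency
        float(word[:1].isupper()),                  # starts with a capital
        float(word.isalpha()),                      # purely alphabetic
        float(word.isdigit()),                      # purely numeric
        float(any(not c.isalnum() for c in word)),  # contains punctuation
    ]

def vectorize(tagged):
    """Turn a (word, tag) corpus into (feature_vector, tag) training pairs."""
    counts = Counter(w for w, _ in tagged)
    total = sum(counts.values())
    return [(features(w, counts[w] / total), t) for w, t in tagged]

def knn_tag(train_vectors, vec):
    """Predict the tag of the nearest training vector (1-NN)."""
    return min(train_vectors, key=lambda xt: math.dist(xt[0], vec))[1]

# Toy "source language" corpus with universal POS tags.
source = [("The", "DET"), ("cat", "NOUN"), ("sat", "VERB"), (".", "PUNC"),
          ("42", "NUM"), ("dogs", "NOUN"), ("ran", "VERB"), ("!", "PUNC")]
train = vectorize(source)

# Tag unseen "target language" words purely from their surface properties.
target = ["Katze", "7", "?"]
t_counts = Counter(target)
tagged = [(w, knn_tag(train, features(w, t_counts[w] / len(target))))
          for w in target]
```

Because no lexical identity enters the feature vectors, the classifier never needs to have seen a single target-language word form, which is the core of the delexicalized approach.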
| { |
| "text": "In most cases, the tagging accuracy improved over the baseline. We thus conclude that human-defined word categories naturally incline towards properties which may give them away even in a totally unknown language. The performance is well below results achieved by contemporary methods based on parallel data, however, it is completely independent of the existence of any parallel or comparable corpora or dictionaries.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "6." |
| }, |
| { |
| "text": "We released Deltacorpus, a collection of texts in 107 languages tagged by the best classifier, assuming that the tagging accuracy will be comparable to what we observed on our evaluation data. For the sake of completeness, we have also included languages for which better resources exist. However, there are dozens of languages that are not even represented in the Bible corpus. We believe that for these languages Deltacorpus can provide a temporary solution, until more resources are available.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "6." |
| }, |
| { |
| "text": "In the future we plan to implement several natural extensions of our approach. For instance, we currently disregard that a word form can have multiple readings, we even disregard local context in sentences to be tagged, and we do not do any weighting of languages according to their similarity or genealogical relatedness. Above all, we would like to explore possible combinations of our approach with the state-of-the-art techniques based on parallel corpora, as we find them complementary. We also plan on releasing a new version of Deltacorpus where the classifiers will be trained on Universal Dependencies treebanks; as a successor of HamleDT, UD should be better harmonized and more reliable.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "6." |
| }, |
| { |
| "text": "However, we do not say that our method is completely language-independent. For instance, we rely on the existence of a meaningful tokenization in the target language.2 There is a certain terminological confusion in this area: sometimes the word \"unsupervised\" is used also for situations in which there are no hand-tagged data available for the target language, but some manual annotation of the source language exists and is projected across parallel data like in(Das and Petrov, 2011). We prefer to avoid the term \"unsupervised\" when manual annotation is used in any language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://universaldependencies.org/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The underlying intution is that some POSs tend to participate in derivation and inflection more intensively than others. Obviously, the root/suffix segmentation is approximated only very roughly here.3.4. Data ResourcesIn our approach, we need two types of data resources:\u2022 raw monolingual texts for both source and target languages; this data is used for extracting feature vectors for words in individual languages; we use W2C, a web-based corpus of 120 languages(Majli\u0161 an\u010f Zabokrtsk\u00fd, 2012),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Here we used word frequency + number of distinct preceding words, word frequency + number of distinct following words, word frequency + number of distinct words sharing the same context", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
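The context-based features listed above could be computed from raw text roughly as follows; this is an assumed sketch, not the authors' implementation, and the function name `context_features` and the sentence-boundary markers are illustrative choices.

```python
# Sketch (assumed, not the authors' code) of the context features above:
# for each word, its corpus frequency together with the number of distinct
# left neighbours, distinct right neighbours, and distinct other words that
# occur in at least one identical (previous, next) context.
from collections import defaultdict

def context_features(tokens):
    freq = defaultdict(int)
    left = defaultdict(set)          # distinct preceding words
    right = defaultdict(set)         # distinct following words
    by_context = defaultdict(set)    # (prev, next) -> words seen there
    padded = ["<s>"] + tokens + ["</s>"]
    for i in range(1, len(padded) - 1):
        w, p, n = padded[i], padded[i - 1], padded[i + 1]
        freq[w] += 1
        left[w].add(p)
        right[w].add(n)
        by_context[(p, n)].add(w)
    # Words sharing the same (prev, next) context with w, excluding w itself.
    shared = {w: set() for w in freq}
    for words in by_context.values():
        for w in words:
            shared[w] |= words - {w}
    return {w: (freq[w], len(left[w]), len(right[w]), len(shared[w]))
            for w in freq}

feats = context_features("the cat sat on the mat the dog sat".split())
# e.g. "cat" and "dog" share the context (the, sat), so each counts the other.
```

Such counts depend only on token co-occurrence statistics, so they can be extracted from the raw W2C text of any language without annotation.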
| { |
| "text": "http://hdl.handle.net/11234/1-1662", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The work was supported by the grants 14-06548P and 15-10472S of the Czech Science Foundation, by the EU project H2020-ICT-2014-1-644402 (HimL), and by the LINDAT/CLARIN project No. LM2015071 of the Ministry of Education, Youth, and Sports of the Czech Republic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "If all you have is a bit of the bible: Learning pos taggers for truly lowresource languages", |
| "authors": [ |
| { |
| "first": "\u017d", |
| "middle": [], |
| "last": "Agi\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "The 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference of the Asian Federation of Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Agi\u0107,\u017d., Hovy, D., and S\u00f8gaard, A. (2015). If all you have is a bit of the bible: Learning pos taggers for truly low- resource languages. In The 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference of the Asian Federation of Natural Language Processing (ACL-IJCNLP 2015).", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A training algorithm for optimal margin classifiers", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [ |
| "E" |
| ], |
| "last": "Boser", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [ |
| "M" |
| ], |
| "last": "Guyon", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [ |
| "N" |
| ], |
| "last": "Vapnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the Fifth Annual Workshop on Computational Learning Theory, COLT '92", |
| "volume": "", |
| "issue": "", |
| "pages": "144--152", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Boser, B. E., Guyon, I. M., and Vapnik, V. N. (1992). A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computa- tional Learning Theory, COLT '92, pages 144-152, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Bagging predictors", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Breiman", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Machine Learning", |
| "volume": "24", |
| "issue": "2", |
| "pages": "123--140", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Breiman, L. (1996). Bagging predictors. Machine Learn- ing, 24(2):123-140, August.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Class-based n-gram models of natural language", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "F" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [ |
| "V" |
| ], |
| "last": "Desouza", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mercer", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [ |
| "J" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "C" |
| ], |
| "last": "Lai", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Computational Linguistics", |
| "volume": "18", |
| "issue": "4", |
| "pages": "467--479", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brown, P. F., deSouza, P. V., Mercer, R. L., Della Pietra, V. J., and Lai, J. C. (1992). Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479, December.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Two decades of unsupervised pos induction: How far have we come?", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Christodouloupoulos", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10", |
| "volume": "", |
| "issue": "", |
| "pages": "575--584", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christodouloupoulos, C., Goldwater, S., and Steedman, M. (2010). Two decades of unsupervised pos induction: How far have we come? In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 575-584, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Nearest neighbor pattern classification", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Cover", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Hart", |
| "suffix": "" |
| } |
| ], |
| "year": 1967, |
| "venue": "IEEE Transactions on Information Theory", |
| "volume": "13", |
| "issue": "", |
| "pages": "21--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cover, T. and Hart, P. (1967). Nearest neighbor pattern classification. IEEE Transactions on Information The- ory, 13:21-27.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Bootstrapping a multilingual part-of-speech tagger in one person-day", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Cucerzan", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 6th Conference on Natural Language Learning", |
| "volume": "20", |
| "issue": "", |
| "pages": "1--7", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cucerzan, S. and Yarowsky, D. (2002). Bootstrapping a multilingual part-of-speech tagger in one person-day. In Proceedings of the 6th Conference on Natural Language Learning -Volume 20, COLING-02, pages 1-7, Strouds- burg, PA, USA. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Unsupervised part-ofspeech tagging with bilingual graph-based projections", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "600--609", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Das, D. and Petrov, S. (2011). Unsupervised part-of- speech tagging with bilingual graph-based projections. In Proceedings of the 49th Annual Meeting of the Associ- ation for Computational Linguistics: Human Language Technologies, pages 600-609, Portland, Oregon, USA, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Simpler unsupervised POS tagging with bilingual projections", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Duong", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Bird", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Cook", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Pecina", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "634--639", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Duong, T., Bird, S., Cook, P., and Pecina, P. (2013). Sim- pler unsupervised POS tagging with bilingual projec- tions. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, number Vol- ume 2: Short Papers, pages 634-639, Sofija, Bulgaria. B\u0203lgarska akademija na naukite, Association for Com- putational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Automatically inducing a part-of-speech tagger by projecting from multiple source languages across aligned corpora", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Fossum", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "P" |
| ], |
| "last": "Abney", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Natural Language Processing -IJC-NLP 2005, Second International Joint Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "862--873", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fossum, V. and Abney, S. P. (2005). Automatically in- ducing a part-of-speech tagger by projecting from mul- tiple source languages across aligned corpora. In Robert Dale, et al., editors, Natural Language Processing -IJC- NLP 2005, Second International Joint Conference, Jeju Island, Korea, October 11-13, 2005, Proceedings, vol- ume 3651 of Lecture Notes in Computer Science, pages 862-873. Springer.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Stochastic gradient boosting", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "H" |
| ], |
| "last": "Friedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Computational Statistics & Data Analysis", |
| "volume": "38", |
| "issue": "4", |
| "pages": "367--378", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Friedman, J. H. (2002). Stochastic gradient boosting. Computational Statistics & Data Analysis, 38(4):367- 378, February.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Random decision forests", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [ |
| "K" |
| ], |
| "last": "Ho", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the Third International Conference on Document Analysis and Recognition", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ho, T. K. (1995). Random decision forests. In Proceed- ings of the Third International Conference on Document Analysis and Recognition (Volume 1) -Volume 1, ICDAR '95, pages 278-, Washington, DC, USA. IEEE Computer Society.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "The random subspace method for constructing decision forests", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [ |
| "K" |
| ], |
| "last": "Ho", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", |
| "volume": "20", |
| "issue": "8", |
| "pages": "832--844", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ho, T. K. (1998). The random subspace method for con- structing decision forests. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(8):832-844.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Language richness of the web", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Majli\u0161", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "And\u017eabokrtsk\u00fd", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)", |
| "volume": "", |
| "issue": "", |
| "pages": "2927--2934", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Majli\u0161, M. and\u017dabokrtsk\u00fd, Z. (2012). Language richness of the web. In Proceedings of the 8th International Con- ference on Language Resources and Evaluation (LREC 2012), pages 2927-2934,\u0130stanbul, Turkey. European Language Resources Association.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Universal dependencies v1: A multilingual treebank collection", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "M.-C", |
| "middle": [], |
| "last": "De Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Ginter", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Pyysalo", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Silveira", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Tsarfaty", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nivre, J., de Marneffe, M.-C., Ginter, F., Goldberg, Y., Haji\u010d, J., Manning, C., McDonald, R., Petrov, S., Pyysalo, S., Silveira, N., Tsarfaty, R., and Zeman, D. (2016). Universal dependencies v1: A multilingual tree- bank collection. In Proceedings of the 10th Interna- tional Conference on Language Resources and Evalu- ation (LREC 2016), Portoro\u017e, Slovenia. European Lan- guage Resources Association.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "A universal part-of-speech tagset", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)", |
| "volume": "", |
| "issue": "", |
| "pages": "2089--2096", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Petrov, S., Das, D., and McDonald, R. (2012). A universal part-of-speech tagset. In Proceedings of the 8th Interna- tional Conference on Language Resources and Evalua- tion (LREC 2012), pages 2089-2096,\u0130stanbul, Turkey. European Language Resources Association.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Inducing multilingual pos taggers and np bracketers via robust projection across aligned corpora", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Ngai", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language Technologies, NAACL '01", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yarowsky, D. and Ngai, G. (2001). Inducing multilin- gual pos taggers and np bracketers via robust projec- tion across aligned corpora. In Proceedings of the Sec- ond Meeting of the North American Chapter of the As- sociation for Computational Linguistics on Language Technologies, NAACL '01, pages 1-8, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Cross-language parser adaptation between related languages", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Workshop on NLP for Less-Privileged Languages, IJCNLP, Hyderabad", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zeman, D. and Resnik, P. (2008). Cross-language parser adaptation between related languages. In Workshop on NLP for Less-Privileged Languages, IJCNLP, Hyder- abad, India.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "HamleDT: Harmonized multi-language dependency treebank", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Du\u0161ek", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Mare\u010dek", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Popel", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Ramasamy", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "\u0160t\u011bp\u00e1nek", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "\u017dabokrtsk\u00fd", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Language Resources and Evaluation", |
| "volume": "48", |
| "issue": "4", |
| "pages": "601--637", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zeman, D., Du\u0161ek, O., Mare\u010dek, D., Popel, M., Ra- masamy, L.,\u0160t\u011bp\u00e1nek, J.,\u017dabokrtsk\u00fd, Z., and Haji\u010d, J. (2014). HamleDT: Harmonized multi-language depen- dency treebank. Language Resources and Evaluation, 48(4):601-637.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "num": null, |
| "uris": null, |
| "text": "Distribution of manually assigned POS tags and predicted POS tags. The numbers are computed over all the 19 testing corpora (i.e., excluding c7).", |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "num": null, |
| "uris": null, |
| "text": "3: Confusion matrix of the best classifier, evaluated on all target languages (sum). Rows correspond to goldstandard tags, columns to predicted tags. NO = NOUN; VB = VERB; AJ = ADJ; AV = ADV; PR = PRON; DT = DET; NU = NUM; AP = ADP; CJ = CONJ; PT = PRT; PU = PUNC.", |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "num": null, |
| "uris": null, |
| "text": "Learning curves for different sizes of texts, on which the features for individual test-set words were computed.", |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "content": "<table><tr><td>target</td><td/><td/><td/><td/><td/><td/><td>source</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>bg</td><td>ca</td><td>cs</td><td>de</td><td>el</td><td>en</td><td>hi</td><td>hu</td><td>it</td><td>pt</td><td>ru</td><td>sv</td><td>tr</td><td>avg</td><td>c7</td></tr><tr><td colspan=\"16\">bg 86.6 cs 68.5 45.4 84.3 63.3 56.2 63.3 50.5 58.4 53.2 47.7 54.4 63.7 50.7 56.3 65.6</td></tr><tr><td>da</td><td colspan=\"15\">61.7 47.7 52.1 55.1 42.1 66.4 40.1 40.6 50.5 53.0 32.8 75.0 41.0 50.6 57.3</td></tr><tr><td>de</td><td colspan=\"15\">55.6 49.5 61.9 91.0 53.7 69.9 46.6 57.7 56.5 59.2 47.4 66.1 53.5 56.5 83.5</td></tr><tr><td>el</td><td colspan=\"15\">50.5 58.9 49.7 47.9 87.0 40.1 38.5 55.2 65.0 57.2 42.7 48.3 38.0 49.3 78.5</td></tr><tr><td>en</td><td colspan=\"15\">54.5 46.8 57.3 60.8 51.5 86.0 50.9 46.1 52.2 49.5 41.0 66.1 56.1 52.7 62.6</td></tr><tr><td>es</td><td colspan=\"15\">58.8 74.6 49.6 47.7 61.6 54.5 51.3 52.1 75.4 79.8 37.3 50.8 38.7 56.3 67.5</td></tr><tr><td>et</td><td colspan=\"15\">53.7 39.0 59.3 57.1 45.7 41.9 38.9 54.9 51.0 44.8 39.2 58.3 54.2 49.1 64.1</td></tr><tr><td>eu</td><td colspan=\"15\">35.7 41.3 47.0 57.2 34.6 48.4 46.7 46.8 39.5 43.6 22.1 47.1 54.5 43.4 62.0</td></tr><tr><td>fa</td><td colspan=\"15\">37.6 41.4 46.9 49.2 33.9 49.7 65.4 25.3 42.5 42.7 37.2 39.5 54.8 43.5 65.9</td></tr><tr><td>fi</td><td colspan=\"15\">43.9 27.8 51.4 46.8 41.3 37.4 41.3 45.5 38.5 30.6 37.1 45.3 50.3 41.3 51.4</td></tr><tr><td>hi</td><td colspan=\"15\">48.6 63.1 40.3 40.2 31.2 55.0 90.6 31.7 47.8 40.2 46.8 38.8 41.8 43.8 86.5</td></tr><tr><td>hu</td><td colspan=\"15\">44.0 54.4 57.6 53.8 54.5 38.7 37.2 81.2 52.1 50.8 35.2 49.7 50.6 48.2 73.5</td></tr><tr><td>it</td><td colspan=\"15\">58.2 67.3 59.0 58.2 62.0 61.4 49.1 51.1 88.5 70.8 47.1 54.7 44.1 56.9 70.2</td></tr><tr><td>la</td><td colspan=\"15\">30.4 28.0 49.7 43.5 32.4 36.7 39.3 39.6 31.7 26.1 41.9 37.5 49.7 37.4 51.1</td></tr><tr><td>nl</td><td colspan=\"15\">53.0 
54.0 55.0 66.1 56.8 56.0 40.9 62.0 62.2 59.1 40.4 58.3 41.4 54.2 60.0</td></tr><tr><td>pt</td><td colspan=\"15\">61.9 55.1 50.2 51.8 49.7 48.1 47.7 54.9 74.4 84.9 43.0 48.6 41.8 52.3 65.1</td></tr><tr><td>ro</td><td colspan=\"15\">50.9 42.3 46.7 50.0 43.1 52.4 57.1 42.9 62.9 59.3 54.8 39.6 41.1 49.5 57.2</td></tr><tr><td>ru</td><td colspan=\"15\">45.2 22.9 51.5 40.8 33.7 36.4 44.6 38.1 37.3 30.1 70.8 40.0 37.7 38.2 43.4</td></tr><tr><td>sk</td><td colspan=\"15\">60.6 38.2 70.7 54.6 46.6 44.4 41.7 44.8 44.2 46.8 45.8 51.8 41.4 48.6 56.0</td></tr><tr><td>sl</td><td colspan=\"15\">59.1 41.0 58.9 55.1 48.4 47.9 35.9 45.8 53.3 49.3 30.1 61.3 44.6 48.5 59.4</td></tr><tr><td>sv</td><td colspan=\"15\">63.3 46.8 56.5 62.1 45.0 64.5 39.5 45.0 50.4 50.8 43.3 80.5 41.9 50.8 63.0</td></tr><tr><td>te</td><td colspan=\"15\">28.0 26.0 39.5 59.3 26.8 41.0 49.9 41.2 32.0 40.7 33.7 37.0 62.3 39.8 57.0</td></tr><tr><td>tr</td><td colspan=\"15\">28.2 26.5 41.8 48.8 24.2 37.4 39.0 44.0 26.9 33.3 26.7 33.4 77.6 34.2 70.9</td></tr></table>", |
| "html": null, |
| "text": "43.2 59.0 53.9 54.9 53.9 53.8 46.2 52.2 58.3 43.0 58.9 40.7 51.5 75.2 bn 27.6 34.0 38.7 41.1 26.7 41.5 52.2 36.2 32.3 39.5 23.8 35.7 51.7 37.0 60.8 ca 46.9 84.6 52.5 47.1 50.8 43.6 45.6 51.2 65.9 70.0 37.3 44.3 38.3 49.5 74.6 Table", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "content": "<table/>", |
| "html": null, |
| "text": "Results of different classifiers and their average. All classifiers in this table were trained on c7 (combination of bg, ca, de, el, hi, hu, and tr), and they were evaluated on languages outside of c7.", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "content": "<table/>", |
| "html": null, |
| "text": "", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "content": "<table/>", |
| "html": null, |
| "text": "The 107 languages in Deltacorpus. Languages from W2C (target languages) are identified by their ISO 639-3 code. Two-letter codes are used to identify languages in HamleDT (source languages). Language family abbre-", |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |