{
"paper_id": "D13-1032",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:41:16.657268Z"
},
"title": "Efficient Higher-Order CRFs for Morphological Tagging",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Munich",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Munich",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Munich",
"location": {
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Training higher-order conditional random fields is prohibitive for huge tag sets. We present an approximated conditional random field using coarse-to-fine decoding and early updating. We show that our implementation yields fast and accurate morphological taggers across six languages with different morphological properties and that across languages higher-order models give significant improvements over 1st-order models.",
"pdf_parse": {
"paper_id": "D13-1032",
"_pdf_hash": "",
"abstract": [
{
"text": "Training higher-order conditional random fields is prohibitive for huge tag sets. We present an approximated conditional random field using coarse-to-fine decoding and early updating. We show that our implementation yields fast and accurate morphological taggers across six languages with different morphological properties and that across languages higher-order models give significant improvements over 1st-order models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Conditional Random Fields (CRFs) (Lafferty et al., 2001) are arguably one of the best performing sequence prediction models for many Natural Language Processing (NLP) tasks. During CRF training, forward-backward computations, a form of dynamic programming, dominate the asymptotic runtime. Training and decoding times thus depend polynomially on the size of the tagset and exponentially on the order of the CRF. This probably explains why CRFs, despite their outstanding accuracy, are normally only applied to tasks with small tagsets such as Named Entity Recognition and Chunking; if they are applied to tasks with bigger tagsets - e.g., to part-of-speech (POS) tagging for English - then they are generally used as 1st-order models.",
"cite_spans": [
{
"start": 33,
"end": 56,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we demonstrate that fast and accurate CRF training and tagging is possible for large tagsets of even thousands of tags by approximating the CRF objective function using coarse-to-fine decoding (Charniak and Johnson, 2005; Rush and Petrov, 2012). Our pruned CRF (PCRF) model has a much smaller runtime than higher-order CRF models and may thus lead to an even broader application of CRFs across NLP tagging tasks.",
"cite_spans": [
{
"start": 208,
"end": 236,
"text": "(Charniak and Johnson, 2005;",
"ref_id": "BIBREF2"
},
{
"start": 237,
"end": 259,
"text": "Rush and Petrov, 2012)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use POS tagging and combined POS and morphological (POS+MORPH) tagging to demonstrate the properties and benefits of our approach. POS+MORPH disambiguation is an important preprocessing step for syntactic parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is usually tackled by applying sequence prediction. POS+MORPH tagging is also a good example of a task where CRFs are rarely applied as the tagsets are often so big that even 1st-order dynamic programming is too expensive. A workaround is to restrict the possible tag candidates per position by using either morphological analyzers (MAs), dictionaries or heuristics (Haji\u010d, 2000). In this paper, however, we show that with pruning (i.e., PCRFs), CRFs can be trained in reasonable time, which makes such hard constraints unnecessary.",
"cite_spans": [
{
"start": 369,
"end": 382,
"text": "(Haji\u010d, 2000)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we run successful experiments on six languages with different morphological properties; we interpret this as evidence that our approach is a general solution to the problem of POS+MORPH tagging. The tagsets in our experiments range from small sizes of 12 to large sizes of up to 1811. We will see that even for the smallest tagset, PCRFs need only 40% of the training time of standard CRFs. For the bigger tagset sizes we can reduce training times from several days to several hours. We will also show that training higher-order PCRF models takes only several minutes longer than training 1st-order models and - depending on the language - may lead to substantial accuracy improvements. For example, in German POS+MORPH tagging, a 1st-order model (trained in 32 minutes) achieves an accuracy of 88.96 while a 3rd-order model (trained in 35 minutes) achieves an accuracy of 90.60. The remainder of the paper is structured as follows: Section 2 describes our CRF implementation and the feature set used. Section 3 summarizes related work on tagging with CRFs, efficient CRF tagging and coarse-to-fine decoding. Section 4 describes experiments on POS tagging and POS+MORPH tagging and Section 5 summarizes the main contributions of the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In a standard CRF we model our sentences using a globally normalized log-linear model. The probability of a tag sequence y given a sentence x is then given as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard CRF Training",
"sec_num": "2.1"
},
{
"text": "p(y | x) = exp( \u2211_{t,i} \u03bb_i \u2022 \u03c6_i(y, x, t) ) / Z(\u03bb, x), where Z(\u03bb, x) = \u2211_{y'} exp( \u2211_{t,i} \u03bb_i \u2022 \u03c6_i(y', x, t) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard CRF Training",
"sec_num": "2.1"
},
{
"text": "where t and i are token and feature indexes, \u03c6_i is a feature function, \u03bb_i is a feature weight and Z is a normalization constant. During training, the feature weights \u03bb are set to maximize the conditional log-likelihood of the training data D:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard CRF Training",
"sec_num": "2.1"
},
{
"text": "ll_D(\u03bb) = \u2211_{(x, y) \u2208 D} log p(y | x, \u03bb)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard CRF Training",
"sec_num": "2.1"
},
{
"text": "In order to use numerical optimization we have to calculate the gradient of the log-likelihood, which is a vector of partial derivatives \u2202 ll_D(\u03bb) / \u2202\u03bb_i. For a training sentence (x, y) and a token index t, the derivative with respect to feature i is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard CRF Training",
"sec_num": "2.1"
},
{
"text": "\u03c6_i(y, x, t) \u2212 \u2211_{y'} \u03c6_i(y', x, t) p(y' | x, \u03bb)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard CRF Training",
"sec_num": "2.1"
},
{
"text": "This is the difference between the empirical feature count in the training data and the expected count under the current model \u03bb. For a 1st-order model, we can replace the expensive sum over all possible tag sequences y' by a sum over all pairs of tags:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard CRF Training",
"sec_num": "2.1"
},
{
"text": "\u03c6_i(y_t, y_{t+1}, x, t) \u2212 \u2211_{y,y'} \u03c6_i(y, y', x, t) p(y, y' | x, \u03bb)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard CRF Training",
"sec_num": "2.1"
},
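The pairwise marginals p(y, y' | x, λ) in this expression are what the forward-backward algorithm computes. As a minimal illustrative sketch (not the authors' implementation; written with raw potentials for readability, whereas real implementations work in log space to avoid underflow):

```python
def pair_marginals(node, edge):
    """Forward-backward on a linear chain.
    node[t][y]: state potential at position t, edge[y][yp]: transition
    potential (shared across positions).
    Returns (Z, marg) with marg[t][y][yp] = p(y_t = y, y_{t+1} = yp | x)."""
    T, K = len(node), len(node[0])
    # Forward pass: alpha[t][y] sums the scores of all prefixes ending in y.
    alpha = [list(node[0])]
    for t in range(1, T):
        alpha.append([node[t][k] * sum(alpha[t - 1][j] * edge[j][k]
                                       for j in range(K)) for k in range(K)])
    # Backward pass: beta[t][y] sums the scores of all suffixes starting at y.
    beta = [[1.0] * K for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = [sum(edge[k][j] * node[t + 1][j] * beta[t + 1][j]
                       for j in range(K)) for k in range(K)]
    z = sum(alpha[-1])  # the partition function Z(lambda, x)
    marg = [[[alpha[t][y] * edge[y][yp] * node[t + 1][yp] * beta[t + 1][yp] / z
              for yp in range(K)] for y in range(K)] for t in range(T - 1)]
    return z, marg
```

With uniform potentials on a two-token, two-tag chain, every one of the four tag sequences is equally likely, so each pairwise marginal is 0.25.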
{
"text": "The probability of a tag pair p(y, y' | x, \u03bb) can then be calculated efficiently using the forward-backward algorithm. If we further reduce the complexity of the model to a 0-order model, we obtain simple maximum entropy model updates:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard CRF Training",
"sec_num": "2.1"
},
{
"text": "\u03c6_i(y_t, x, t) \u2212 \u2211_{y} \u03c6_i(y, x, t) p(y | x, \u03bb)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard CRF Training",
"sec_num": "2.1"
},
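The 0-order update above (empirical count minus expected count) can be sketched per token as follows; a toy illustration with hypothetical feature-id and dictionary conventions, not the paper's code:

```python
import math

def maxent_gradient(weights, feats_per_tag, gold_tag):
    """Per-token gradient of the 0-order (maximum entropy) log-likelihood.
    feats_per_tag: {tag: iterable of active feature ids}.
    weights: {feature id: weight}, missing entries treated as 0."""
    # Unnormalized score exp(sum of active weights) for each candidate tag.
    scores = {t: math.exp(sum(weights.get(f, 0.0) for f in fs))
              for t, fs in feats_per_tag.items()}
    z = sum(scores.values())
    grad = {}
    # Empirical count: features active with the gold tag.
    for f in feats_per_tag[gold_tag]:
        grad[f] = grad.get(f, 0.0) + 1.0
    # Expected count: features of every tag, weighted by p(y | x).
    for t, fs in feats_per_tag.items():
        p = scores[t] / z
        for f in fs:
            grad[f] = grad.get(f, 0.0) - p
    return grad
```

With all-zero weights and two equally likely tags, the gold tag's feature gets gradient 1 − 0.5 = 0.5 and the other −0.5.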
{
"text": "As we discussed in the introduction, we want to decode sentences by applying a variant of coarse-to-fine tagging. Naively, to later tag with nth-order accuracy we would train a series of n CRFs of increasing order. We would then use the CRF of order n \u2212 1 to restrict the input of the CRF of order n. In this paper we approximate this approach, but do so while training only one integrated model. This way we can save both memory (by sharing feature weights between different models) and training time (by skipping most lower-order updates).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruned CRF Training",
"sec_num": "2.2"
},
{
"text": "The main idea of our approach is to create increasingly complex lattices and to filter candidate states at every step to prevent a polynomial increase in lattice size. The first step is to create a 0-order lattice, which, as discussed above, is identical to a series of independent local maximum entropy models p(y | x, t). The models base their prediction on the current word x_t and the immediate lexical context. We then calculate the posterior probabilities and remove states y with p(y | x, t) < \u03c4_0 from the lattice, where \u03c4_0 is a parameter. The resulting reduced lattice is similar to what we would obtain using candidate selection based on an MA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruned CRF Training",
"sec_num": "2.2"
},
{
"text": "We can now create a first-order lattice by adding transitions to the pruned lattice and pruning with threshold \u03c4_1. The only difference from 0-order pruning is that we now have to run forward-backward to calculate the probabilities p(y | x, t). Note that in theory we could also apply the pruning to transition probabilities of the form p(y, y' | x, t); however, this does not seem to yield more accurate models and is less efficient than state pruning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruned CRF Training",
"sec_num": "2.2"
},
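The state-pruning step itself is simple once the posteriors are available; a minimal sketch (hypothetical data layout, posteriors p(y | x, t) assumed precomputed, e.g. by forward-backward):

```python
def prune_states(posteriors, tau):
    """Remove, at every position t, all states y with p(y | x, t) < tau.
    posteriors: list over token positions of {state: posterior probability}.
    Returns the surviving candidate states per position."""
    return [[y for y, p in pos.items() if p >= tau] for pos in posteriors]
```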
{
"text": "For higher-order lattices we merge pairs of states into new states, add transitions and prune with threshold \u03c4_i . We train the model using l1-regularized Stochastic Gradient Descent (SGD) (Tsuruoka et al., 2009) . We would like to create a cascade of increasingly complex lattices and update the weight vector with the gradient of the last lattice. The updates, however, are undefined if the gold sequence is pruned from the lattice. A solution would be to simply reinsert the gold sequence, but this yields poor results as the model never learns to keep the gold sequence in the lower-order lattices. As an alternative we perform the gradient update with the highest lattice still containing the gold sequence. This approach is similar to \"early updating\" (Collins and Roark, 2004) in perceptron learning, where during beam search an update with the highest scoring partial hypothesis is performed whenever the gold candidate falls out of the beam. Intuitively, we are trying to optimize an nth-order CRF objective function, but apply small lower-order corrections to the weight vector when necessary to keep the gold candidate in the lattice.",
"cite_spans": [
{
"start": 189,
"end": 212,
"text": "(Tsuruoka et al., 2009)",
"ref_id": "BIBREF30"
},
{
"start": 758,
"end": 783,
"text": "(Collins and Roark, 2004)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pruned CRF Training",
"sec_num": "2.2"
},
{
"text": "Figure 1: Lattice generation during training. 1: function GETSUMLATTICE(sentence, \u03c4) 2: gold-tags \u2190 getTags(sentence) 3: candidates \u2190 getAllCandidates(sentence) 4: lattice \u2190 ZeroOrderLattice(candidates) 5: for i = 1 \u2192 n do 6: candidates \u2190 lattice.prune(\u03c4_{i-1}) 7: if gold-tags \u2209 candidates then 8: return lattice 9: end if 10: if i > 1 then 11: candidates \u2190 mergeStates(candidates) 12: end if 13: candidates \u2190 addTransitions(candidates) 14: lattice \u2190 SequenceLattice(candidates, i) 15: end for 16: return lattice 17: end function",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pruned CRF Training",
"sec_num": "2.2"
},
{
"text": "Figure 1 illustrates the lattice generation process. The lattice generation during decoding is identical, except that we always return a lattice of the highest order n.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pruned CRF Training",
"sec_num": "2.2"
},
{
"text": "The savings in training time of this integrated approach are large; e.g., training a maximum entropy model over a tagset of roughly 1800 tags and more than half a million instances is slow, as we have to apply 1800 weight vector updates for every token in the training set in every SGD iteration. In the integrated model we only have to apply the 1800 updates when we lose the gold sequence during filtering. Thus, in our implementation, training a 0-order model for Czech takes roughly twice as long as training a 1st-order model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruned CRF Training",
"sec_num": "2.2"
},
{
"text": "Our approach would not work if we were to set the parameters \u03c4_i to fixed, predetermined values, because the \u03c4_i depend on the size of the tagset and should be adapted during training, as we start the training with a uniform model that becomes more specific. We therefore set the \u03c4_i by specifying \u00b5_i, the average number of tags per position that should remain in the lattice after pruning. This also guarantees stable lattice sizes and thus stable training times. We achieve a stable average number of tags per position by setting the \u03c4_i dynamically during training: we measure the real average number of candidates per position, \u03bc\u0302_i, and apply corrections after processing a certain fraction of the sentences of the training set. The updates are of the form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Threshold Estimation",
"sec_num": "2.3"
},
{
"text": "\u03c4_i \u2190 \u03c4_i + 0.1 \u2022 \u03c4_i if \u03bc\u0302_i > \u00b5_i ; \u03c4_i \u2190 \u03c4_i \u2212 0.1 \u2022 \u03c4_i if \u03bc\u0302_i < \u00b5_i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Threshold Estimation",
"sec_num": "2.3"
},
{
"text": "Figure 2 shows an example training run for German with \u00b5_0 = 4. Here the 0-order lattice reduces the number of tags per position from 681 to 4, losing roughly 15% of the gold sequences of the development set, which means that for 85% of the sentences the correct candidate is still in the lattice. This corresponds to more than 99% of the tokens. We can also see that after two iterations only a very small number of 0-order updates have to be performed.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Threshold Estimation",
"sec_num": "2.3"
},
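The threshold adaptation can be sketched as below; a minimal transcription of the \u00b110% rule, assuming the consistent reading that since states with p < \u03c4_i are removed, more surviving candidates than the target \u00b5_i calls for a larger (more aggressive) threshold:

```python
def adapt_threshold(tau, measured_avg, target_avg, step=0.1):
    """Nudge the pruning threshold tau so the measured average number of
    surviving candidates per position drifts toward the target.
    Too many survivors -> raise tau (prune harder); too few -> lower it."""
    if measured_avg > target_avg:
        return tau + step * tau
    if measured_avg < target_avg:
        return tau - step * tau
    return tau
```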
{
"text": "As discussed before, for the very large POS+MORPH tagsets most of the decoding time is spent on the 0-order level. To decrease the number of tag candidates in the 0-order model, we decode in two steps by separating the fully specified tag into a coarse-grained part-of-speech (POS) tag and a fine-grained MORPH tag containing the morphological features. We first build a lattice over POS candidates and apply our pruning strategy. In a second step we expand the remaining POS tags into all the combinations with MORPH tags that were seen in the training set. We thus build a sequence of lattices of both increasing order and increasing tag complexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tag Decomposition",
"sec_num": "2.4"
},
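The second decoding stage described above can be sketched as follows (hypothetical helper and data-structure names; the seen POS-to-MORPH combinations are assumed to be collected from the training set beforehand):

```python
def expand_pos_candidates(pos_candidates, seen_with_pos):
    """Expand each surviving coarse POS tag into the full POS+MORPH tags
    it was observed with in the training data.
    seen_with_pos: {pos tag: [full POS+MORPH tags]}."""
    return [full for pos in pos_candidates
                 for full in seen_with_pos.get(pos, [])]
```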
{
"text": "We use the features of Ratnaparkhi (1996) and Manning (2011): the current, preceding and succeeding words as unigrams and bigrams; for rare words, prefixes and suffixes up to length 10; and the occurrence of capital characters, digits and special characters. We define a rare word as a word with training set frequency \u2264 10. We concatenate every feature with the POS tag, the MORPH tag and every morphological feature. E.g., for the word \"der\", the POS tag art (article) and the MORPH tag gen|sg|fem (genitive, singular, feminine) we get the following features for the current-word template: der+art, der+gen|sg|fem, der+gen, der+sg and der+fem.",
"cite_spans": [
{
"start": 23,
"end": 41,
"text": "Ratnaparkhi (1996)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Set",
"sec_num": "2.5"
},
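The feature concatenation from the "der" example can be sketched directly (illustrative only; template value, POS and MORPH encodings follow the paper's example):

```python
def tag_features(value, pos, morph):
    """Concatenate one template value (e.g. the current word) with the POS
    tag, the full MORPH tag and each individual morphological feature."""
    feats = [value + "+" + pos, value + "+" + morph]
    feats += [value + "+" + m for m in morph.split("|")]
    return feats
```

For "der" with POS art and MORPH gen|sg|fem this reproduces the five features listed above.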
{
"text": "We also use an additional binary feature, which indicates whether the current word has been seen with the current tag or - if the word is rare - whether the tag is in a set of open tag classes. The open tag classes are estimated by 10-fold cross-validation on the training set: we first use the folds to estimate how often a tag is seen with an unknown word. We then consider tags with a relative frequency \u2265 10^-4 as open tag classes. While this is a heuristic, it is safer to use a \"soft\" heuristic as a feature in the lattice than as a hard constraint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Set",
"sec_num": "2.5"
},
{
"text": "For some experiments we also use the output of a morphological analyzer (MA). In that case we use every analysis of the MA as a nominal feature. This approach is attractive because it does not require the output of the MA and the annotation of the treebank to be identical; in fact, it can even be used if treebank annotation and MA use completely different features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Set",
"sec_num": "2.5"
},
{
"text": "Because the weight vector dimensionality is high for large tagsets and productive languages, we use a hash kernel (Shi et al., 2009) to keep the dimensionality constant.",
"cite_spans": [
{
"start": 114,
"end": 132,
"text": "(Shi et al., 2009)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Set",
"sec_num": "2.5"
},
{
"text": "Smith et al. (2005) use CRFs for POS+MORPH tagging, but use a morphological analyzer for candidate selection. They report training times of several days and note that they had to use simplified models for Czech.",
"cite_spans": [
{
"start": 0,
"end": 19,
"text": "Smith et al. (2005)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
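A hash kernel in this sense can be sketched as follows (illustrative; the paper does not specify the hash function, so CRC32 here is an assumption):

```python
import zlib

def hashed_index(feature, dim=2 ** 20):
    """Map an arbitrary feature string into a fixed number of weight slots.
    Hash collisions are accepted in exchange for bounded, constant
    dimensionality of the weight vector."""
    return zlib.crc32(feature.encode("utf-8")) % dim
```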
{
"text": "Several methods have been proposed to reduce CRF training times. Stochastic gradient descent can be applied to reduce the training time by a factor of 5 (Tsuruoka et al., 2009) without drastic losses in accuracy. Lavergne et al. (2010) make use of feature sparsity to significantly speed up training for moderate tagset sizes (< 100) and huge feature spaces. It is unclear if their approach would also work for huge tag sets (> 1000).",
"cite_spans": [
{
"start": 153,
"end": 176,
"text": "(Tsuruoka et al., 2009)",
"ref_id": "BIBREF30"
},
{
"start": 213,
"end": 235,
"text": "Lavergne et al. (2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Coarse-to-fine decoding has been successfully applied to CYK parsing, where full dynamic programming is often intractable when big grammars are used (Charniak and Johnson, 2005) . Weiss and Taskar (2010) develop cascades of models of increasing complexity in a framework based on perceptron learning and an explicit trade-off between accuracy and efficiency. Kaji et al. (2010) propose a modified Viterbi algorithm that is still optimal but, depending on the task and especially for big tag sets, can be several orders of magnitude faster. While their algorithm can be used to produce fast decoders, there is no such modification for the forward-backward algorithm used during CRF training.",
"cite_spans": [
{
"start": 148,
"end": 176,
"text": "(Charniak and Johnson, 2005)",
"ref_id": "BIBREF2"
},
{
"start": 179,
"end": 202,
"text": "Weiss and Taskar (2010)",
"ref_id": "BIBREF31"
},
{
"start": 358,
"end": 376,
"text": "Kaji et al. (2010)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "We run POS+MORPH tagging experiments on Arabic (ar), Czech (cs), Spanish (es), German (de) and Hungarian (hu). The following table shows the type-token (T/T) ratio, the average number of tags of every word form that occurs more than once in the training set (A) and the number of tags of the most ambiguous word form (\u00c2):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "ar: T/T 0.06, A 2.06, \u00c2 17; cs: T/T 0.13, A 1.64, \u00c2 23; es: T/T 0.09, A 1.14, \u00c2 9; de: T/T 0.11, A 2.15, \u00c2 44; hu: T/T 0.11, A 1.11, \u00c2 10. Arabic is a Semitic language with nonconcatenative morphology. An additional difficulty is that vowels are often not written in Arabic script. This introduces a high number of ambiguities; on the other hand it reduces the type-token ratio, which generally makes learning easier. In this paper, we work with the transliteration of Arabic provided in the Penn Arabic Treebank. Czech is a highly inflecting Slavic language with a large number of morphological features. Spanish is a Romance language. Based on the statistics above we can see that it has few POS+MORPH ambiguities. It is also the language with the smallest tagset and the only language in our setup that - with a few exceptions - does not mark case. German is a Germanic language and - based on the statistics above - the language with the most ambiguous morphology. The reason is that it only has a small number of inflectional suffixes. The total number of nominal inflectional suffixes, for example, is five. A good example of a highly ambiguous suffix is \"en\", which is a marker for infinitive verb forms, for the 1st and 3rd person plural and for the polite 2nd person singular. Additionally, it marks plural nouns of all cases and singular nouns in genitive, dative and accusative case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Hungarian is a Finno-Ugric language with an agglutinative morphology; this results in a high typetoken ratio, but also the lowest level of word form ambiguity among the selected languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "POS tagging experiments are run on all the languages above and also on English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For Arabic we use the Penn Arabic Treebank (Maamouri et al., 2004) , parts 1-3 in their latest versions (LDC2010T08, LDC2010T13, LDC2011T09). As training set we use parts 1 and 2 and part 3 up to section ANN20020815.0083. All consecutive sections up to ANN20021015.0096 are used as development set and the remainder as test set. We use the unvocalized and pretokenized transliterations as input. For Czech and Spanish, we use the CoNLL 2009 data sets (Haji\u010d et al., 2009) ; for German, the TIGER treebank (Brants et al., 2002) with the split from Fraser et al. 2013; for Hungarian, the Szeged treebank (Csendes et al., 2005) with the split from Farkas et al. (2012) . For English we use the Penn Treebank (Marcus et al., 1993) with the split from Toutanova et al. (2003) .",
"cite_spans": [
{
"start": 43,
"end": 66,
"text": "(Maamouri et al., 2004)",
"ref_id": "BIBREF16"
},
{
"start": 451,
"end": 471,
"text": "(Haji\u010d et al., 2009)",
"ref_id": "BIBREF10"
},
{
"start": 505,
"end": 526,
"text": "(Brants et al., 2002)",
"ref_id": "BIBREF0"
},
{
"start": 602,
"end": 624,
"text": "(Csendes et al., 2005)",
"ref_id": "BIBREF6"
},
{
"start": 645,
"end": 665,
"text": "Farkas et al. (2012)",
"ref_id": "BIBREF7"
},
{
"start": 705,
"end": 726,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF18"
},
{
"start": 747,
"end": 770,
"text": "Toutanova et al. (2003)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "4.1"
},
{
"text": "We also compute the possible POS+MORPH tags for every word using MAs. For Arabic we use the AraMorph reimplementation of Buckwalter (2002) , for Czech the \"free\" morphology (Haji\u010d, 2001) , for Spanish Freeling (Padr\u00f3 and Stanilovsky, 2012) , for German DMOR (Schiller, 1995) and for Hungarian Magyarlanc 2.0 (Zsibrita et al., 2013) .",
"cite_spans": [
{
"start": 121,
"end": 138,
"text": "Buckwalter (2002)",
"ref_id": "BIBREF1"
},
{
"start": 173,
"end": 186,
"text": "(Haji\u010d, 2001)",
"ref_id": "BIBREF12"
},
{
"start": 210,
"end": 239,
"text": "(Padr\u00f3 and Stanilovsky, 2012)",
"ref_id": "BIBREF20"
},
{
"start": 258,
"end": 274,
"text": "(Schiller, 1995)",
"ref_id": "BIBREF23"
},
{
"start": 308,
"end": 331,
"text": "(Zsibrita et al., 2013)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "4.1"
},
{
"text": "To compare the training and decoding times we run all experiments on the same test machine, which features two Hexa-Core Intel Xeon X5680 CPUs with 3.33 GHz and 6 cores each and 144 GB of memory. The baseline tagger and our PCRF implementation are run single threaded. 2 The taggers are implemented in different programming languages and with different degrees of optimization; still, the run times are indicative of comparative performance to be expected in practice.",
"cite_spans": [
{
"start": 269,
"end": 270,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.2"
},
{
"text": "Our Java implementation is always run with 10 SGD iterations and a regularization parameter of 0.1, which for German was the optimal value out of {0, 0.01, 0.1, 1.0}. We follow Tsuruoka et al. (2009) in our implementation of SGD and shuffle the training set between epochs. All numbers shown are averages over 5 independent runs. Where not noted otherwise, we use \u00b5 0 = 4, \u00b5 1 = 2 and \u00b5 2 = 1.5. We found that higher values do not consistently increase performance on the development set, but result in much higher training times.",
"cite_spans": [
{
"start": 177,
"end": 199,
"text": "Tsuruoka et al. (2009)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.2"
},
{
"text": "In a first experiment we evaluate the speed and accuracy of CRFs and PCRFs on the POS tagsets. As shown in Table 1 , the tagset sizes range from 12 for Czech and Spanish to 54 and 57 for German and Hungarian, with Arabic (38) and English (45) in between. The results of our experiments are given in Table 2 . For the 1st-order models, we observe speed-ups in training time by factors of 2.3 to 31 at no loss in accuracy. For all languages, training pruned higher-order models is faster than training unpruned 1st-order models and yields more accurate models. Accuracy improvements range from 0.08 for Hungarian to 0.25 for German. We can conclude that for small and medium tagset sizes PCRFs give substantial improvements in both training and decoding speed and thus allow for higher-order tagging, which for all languages leads to significant accuracy improvements.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 114,
"text": "Table 1",
"ref_id": null
},
{
"start": 297,
"end": 304,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "POS Experiments",
"sec_num": "4.3"
},
{
"text": "Ideally, for the full POS+MORPH tagset we would also compare our results to an unpruned CRF, but our implementation turned out to be too slow to do the required number of experiments. For German, the model processed \u2248 0.1 sentences per second during training; so running 10 SGD iterations on the 40,472 sentences would take more than a month. We therefore compare our model against models that perform oracle pruning, which means we perform standard pruning, but always keep the gold candidate in the lattice. The oracle pruning is applied during training and testing on the development set. The oracle model performance is thus an upper bound for the performance of an unpruned CRF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS+MORPH Oracle Experiments",
"sec_num": "4.4"
},
{
"text": "The most interesting pruning step happens at the 0-order level when we reduce from hundreds of candidates to just a couple. Table 3 shows the results for 1st-order CRFs.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "POS+MORPH Oracle Experiments",
"sec_num": "4.4"
},
{
"text": "We can roughly group the five languages into three groups: for Spanish and Hungarian the damage is negligible, for Arabic we see a small decrease of 0.07 and only for Czech and German do we observe considerable differences of 0.14 and 0.37. Surprisingly, doubling the number of candidates per position does not lead to significant improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS+MORPH Oracle Experiments",
"sec_num": "4.4"
},
{
"text": "We can conclude that except for Czech and German losses due to pruning are insignificant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS+MORPH Oracle Experiments",
"sec_num": "4.4"
},
{
"text": "One argument for PCRFs is that while they might be less accurate than standard CRFs they allow training higher-order models, which in turn might be more accurate than their standard lower-order counterparts. In this section, we investigate how big the improvements of higher-order models are. The results are given in the following table. (Table 3 : Accuracies for models with and without oracle pruning. * indicates models significantly worse than the oracle model.) We see that 2nd-order models give improvements for all languages. For Spanish and Hungarian we see minor improvements \u2264 0.1.",
"cite_spans": [],
"ref_spans": [
{
"start": 340,
"end": 347,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "POS+MORPH Higher-Order Experiments",
"sec_num": "4.5"
},
{
"text": "For Czech we see a moderate improvement of 0.61 and for Arabic and German we observe substantial improvements of 0.96 and 1.31. An analysis on the development set revealed that for all three languages, case is the morphological feature that benefits most from higher-order models. A possible explanation is that case has a high correlation with syntactic relations and is thus affected by long-distance dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS+MORPH Higher-Order Experiments",
"sec_num": "4.5"
},
{
"text": "German is the only language where fourgram models give an additional improvement over trigram models. The reason seem to be sentences with longrange dependencies, e.g., \"Die Rebellen haben kein L\u00f6segeld verlangt\" (The rebels have not demanded any ransom); \"verlangt\" (demanded) is a past particple that is separated from the auxilary verb \"haben\" (have). The 2 nd -order model does not consider enough context and misclassifies \"verlangt\" as a finite verb form, while the 3 rd -order model tags it correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS+MORPH Higher-Order Experiments",
"sec_num": "4.5"
},
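The role of model order in examples like this can be made concrete: an n-th-order model scores tag n-grams, so decoding amounts to Viterbi search over histories that are tuples of the previous n tags. The sketch below is our own simplified formulation (the scoring interface and names are assumptions, not the paper's implementation), kept exhaustive rather than pruned for clarity.

```python
def viterbi_ngram(words, tags, score, n=2):
    """Viterbi decoding for an n-th-order model over tag n-gram histories.

    score(i, history, tag) -> float: local score of assigning `tag` at
    position i given the previous n tags `history` (padded with None).
    """
    start = (None,) * n
    best = {start: (0.0, [])}  # history -> (best score, best tag path)
    for i in range(len(words)):
        nxt = {}
        for hist, (s, path) in best.items():
            for t in tags:
                h2 = hist[1:] + (t,)  # shift the history window
                cand = (s + score(i, hist, t), path + [t])
                if h2 not in nxt or cand[0] > nxt[h2][0]:
                    nxt[h2] = cand
        best = nxt
    return max(best.values())[1]

# Toy scorer that rewards a tag differing from the previous one;
# with n=1 the best path alternates tags.
tags = ["A", "B"]
sc = lambda i, hist, t: 1.0 if t != hist[-1] else 0.0
path = viterbi_ngram(["w1", "w2", "w3"], tags, sc, n=1)
```

The state space grows as |tags|^n, which is why unpruned higher-order decoding is prohibitive for the huge POS+MORPH tagsets considered here and why the 0-order pruning step matters.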
{
"text": "We can also conclude that the improvements for higher-order models are always higher than the loss we estimated in the oracle experiments. More precisely we see that if a language has a low number of word form ambiguities (e.g., Hungarian) we observe a small loss during 0-order pruning but we also have to expect less of an improvement when increasing the order of the model. For languages with a high number of word form ambiguities (e.g., German) we must anticipate some loss during 0-order pruning, but we also see substantial benefits for higher-order models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS+MORPH Higher-Order Experiments",
"sec_num": "4.5"
},
{
"text": "Surprisingly, we found that higher-order PCRF models can also avoid the pruning errors of lowerorder models. Here is an example from the German data. The word \"Januar\" (January) is ambiguous: in the training set, it occurs 108 times as dative, 9 times as accusative and only 5 times as nominative. The development set contains 48 nominative instances of \"Januar\" in datelines at the end of news articles, e.g., \"TEL AVIV, 3. Januar\". For these 48 occurrences, (i) the oracle model in Table 3 selects the correct case nominative, (ii) the 1 st -order PCRF model selects the incorrect case accusative, and (iii) the 2 ndand 3 rd -order models select -unlike the 1 st -order model -the correct case nominative. Our interpretation is that the correct nominative reading is pruned from the 0-order lattice. However, the higher-order models can put less weight on 0-order features as they have access to more context to disambiguate the sequence. The lower weights of order-0 result in a more uniform posterior distribution and the nominative reading is not pruned from the lattice.",
"cite_spans": [],
"ref_spans": [
{
"start": 484,
"end": 491,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "POS+MORPH Higher-Order Experiments",
"sec_num": "4.5"
},
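The interpretation in the last paragraph can be illustrated numerically: scaling down the weights on 0-order features flattens the softmax posterior, so a fixed pruning threshold retains more readings. The scores, weights, and threshold below are invented for illustration only.

```python
import math

def posterior(scores, weight):
    """Softmax over 0-order scores scaled by a shared feature weight."""
    exps = {t: math.exp(weight * s) for t, s in scores.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def kept(post, threshold=0.05):
    """Tags surviving posterior-threshold pruning."""
    return {t for t, p in post.items() if p >= threshold}

scores = {"Dat": 3.0, "Acc": 1.5, "Nom": 0.0}  # invented 0-order scores
print(kept(posterior(scores, 2.0)))  # high weight: peaked, only 'Dat' survives
print(kept(posterior(scores, 0.5)))  # low weight: flatter, 'Nom' is kept too
```

With weight 2.0 the posterior of "Nom" is about 0.002 and it is pruned; with weight 0.5 it rises to about 0.13 and survives, mirroring the "Januar" example.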
{
"text": "In this section we compare the improvements of higher-order models when used with MAs. Plus and minus indicate models that are significantly better or worse than MA1. We can see that the improvements due to higher-order models are orthogonal to the improvements due to MAs for all languages. This was to be expected as MAs provide additional lexical knowledge while higher-order models provide additional information about the context. For Arabic and German the improvements of higher-order models are bigger than the improvements due to MAs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with Morph. Analyzers",
"sec_num": "4.6"
},
{
"text": "We use the following baselines: SVMTool (Gim\u00e9nez and M\u00e0rquez, 2004) , an SVM-based discriminative tagger; RFTagger (Schmid and Laws, 2008) , an n-gram Hidden Markov Model (HMM) tagger developed for POS+MORPH tagging; Morfette (Chrupa\u0142a et al., 2008) , an averaged perceptron with beam search decoder; CRFSuite (Okazaki, 2007) , a fast CRF implementation; and the Stanford Tagger (Toutanova et al., 2003) , a bidirectional Maximum Entropy Markov Model. For POS+MORPH tagging, all baselines are trained on the concatenation of POS tag and MORPH tag. We run SVM-Tool with the standard feature set and the optimal c-values \u2208 {0.1, 1, 10}. Morfette is run with the default options. For CRFSuite we use l 2 -regularized SGD training. We use the optimal regularization parameter \u2208 {0.01, 0.1, 1.0} and stop after 30 iterations where we reach a relative improvement in regularized likelihood of at most 0.01 for all languages. The feature set is identical to our model except for some restrictions: we only use concatenations with the full tag and we do not use the binary feature that indicates whether a word-tag combination has been observed. We also had to restrict the combinations of tag and features to those observed in the training set 5 . Otherwise the memory requirements would exceed the memory of our test machine (144 GB) for Czech and Hungarian. The Stanford Tagger is used 5 We set the CRFSuite option possible states = 0 as a bidirectional 2 nd -order model and trained using OWL-BFGS. For Arabic, German and English we use the language specific feature sets and for the other languages the English feature set.",
"cite_spans": [
{
"start": 40,
"end": 67,
"text": "(Gim\u00e9nez and M\u00e0rquez, 2004)",
"ref_id": "BIBREF9"
},
{
"start": 115,
"end": 138,
"text": "(Schmid and Laws, 2008)",
"ref_id": "BIBREF24"
},
{
"start": 226,
"end": 249,
"text": "(Chrupa\u0142a et al., 2008)",
"ref_id": "BIBREF3"
},
{
"start": 310,
"end": 325,
"text": "(Okazaki, 2007)",
"ref_id": "BIBREF19"
},
{
"start": 379,
"end": 403,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF29"
},
{
"start": 1381,
"end": 1382,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Baselines",
"sec_num": "4.7"
},
{
"text": "Development set results for POS tagging are shown in Table 4 . We can observe that Morfette, CRFSuite and the PCRF models for different orders have training times in the same order of magnitude. For Arabic, Czech and English, the PCRF accuracy is comparable to the best baseline models. For the other languages we see improvements of 0.13 for Spanish, 0.18 for Hungarian and 0.24 for German. Evaluation on the test set confirms these results, see Table 5 . 6 The POS+MORPH tagging development set results are presented in Table 6 . Morfette is the fastest discriminative baseline tagger. In comparison with Morfette the speed up for 3 rd -order PCRFs lies between 1.7 for Czech and 5 for Arabic. Morfette gives the best baseline results for Arabic, Spanish and Hungarian and CRFSuite for Czech and German. The accuracy improvements of the best PCRF models over the best baseline models range from 0.27 for Spanish over 0.58 for Hungarian, 1.91 for Arabic, 1.96 for Czech to 2.82 for German. The test set experiments in Table 7 confirm these results.",
"cite_spans": [
{
"start": 457,
"end": 458,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 447,
"end": 454,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 522,
"end": 529,
"text": "Table 6",
"ref_id": "TABREF6"
},
{
"start": 1019,
"end": 1026,
"text": "Table 7",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Comparison with Baselines",
"sec_num": "4.7"
},
{
"text": "We presented the pruned CRF (PCRF) model for very large tagsets. The model is based on coarse-tofine decoding and stochastic gradient descent training with early updating. We showed that for moderate tagset sizes of \u2248 50, the model gives significant speed-ups over a standard CRF with negligible losses in accuracy. Furthermore, we showed that training and tagging for approximated trigram and fourgram models is still faster than standard 1 storder tagging, but yields significant improvements in accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In oracle experiments with POS+MORPH tagsets we demonstrated that the losses due to our approximation depend on the word level ambiguity of the respective language and are moderate (\u2264 0.14) except for German where we observed a loss of 0.37.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We also showed that higher order tagging -which is prohibitive for standard CRF implementationsyields significant improvements over unpruned 1 storder models. Analogous to the oracle experiments we observed big improvements for languages with a high level of POS+MORPH ambiguity such as German and smaller improvements for languages with less ambiguity such as Hungarian and Spanish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our java implementation MarMoT is available at https://code.google.com/p/cistern/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our tagger might actually use more than one core because the Java garbage collection is run in parallel.3 Decoding speeds are provided in an appendix submitted separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Throughout the paper we establish significance by running approximate randomization tests on sentences(Yeh, 2000).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
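The sentence-level approximate randomization test can be sketched as follows; this is a standard formulation of the test, with variable names and the per-sentence correct-count representation being our own assumptions.

```python
import random

def approx_randomization(correct_a, correct_b, trials=10000, seed=0):
    """Two-sided approximate randomization test on per-sentence scores.

    correct_a / correct_b: per-sentence correct-token counts of two taggers
    on the same sentences. Returns the estimated p-value for the observed
    difference in total correct tokens.
    """
    rng = random.Random(seed)
    observed = abs(sum(correct_a) - sum(correct_b))
    hits = 0
    for _ in range(trials):
        diff = 0
        for a, b in zip(correct_a, correct_b):
            if rng.random() < 0.5:  # randomly swap the systems' outputs
                a, b = b, a
            diff += a - b
        # count shuffles at least as extreme as the observed difference
        if abs(diff) >= observed:
            hits += 1
    return (hits + 1) / (trials + 1)
```

Identical score sequences yield a p-value of 1.0, while a difference that no random relabeling of sentences can reproduce yields a p-value near 1/(trials + 1).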
{
"text": "Gim\u00e9nez and M\u00e0rquez (2004) report an accuracy of 97.16 instead of 97.12 for SVMTool for English and Manning (2011) an accuracy of 97.29 instead of 97.28 for the Stanford tagger.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The first author is a recipient of the Google Europe Fellowship in Natural Language Processing, and this research is supported in part by this Google Fellowship. This research was also funded by DFG (grant SFB 732).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The TIGER treebank",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Stefanie",
"middle": [],
"last": "Dipper",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Lezius",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the workshop on treebanks and linguistic theories",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Brants, Stefanie Dipper, Silvia Hansen, Wolfgang Lezius, and George Smith. 2002. The TIGER tree- bank. In Proceedings of the workshop on treebanks and linguistic theories.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Buckwalter Arabic Morphological Analyzer Version 1.0. Linguistic Data Consortium",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Buckwalter",
"suffix": ""
}
],
"year": 2002,
"venue": "LDC Catalog",
"volume": "",
"issue": "",
"pages": "2002--2051",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Buckwalter. 2002. Buckwalter Arabic Morpholog- ical Analyzer Version 1.0. Linguistic Data Consor- tium, University of Pennsylvania, 2002. LDC Catalog No.: LDC2002L49.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Coarse-tofine n-best parsing and MaxEnt discriminative reranking",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to- fine n-best parsing and MaxEnt discriminative rerank- ing. In Proceedings of ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning morphology with Morfette",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Chrupa\u0142a, Georgiana Dinu, and Josef van Gen- abith. 2008. Learning morphology with Morfette. In Proceedings of LREC.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Incremental parsing with the perceptron algorithm",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden Markov models: Theory and experi- ments with perceptron algorithms. In Proceedings of EMNLP.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The Szeged treebank",
"authors": [
{
"first": "D\u00f3ra",
"middle": [],
"last": "Csendes",
"suffix": ""
},
{
"first": "J\u00e1nos",
"middle": [],
"last": "Csirik",
"suffix": ""
},
{
"first": "Tibor",
"middle": [],
"last": "Gyim\u00f3thy",
"suffix": ""
},
{
"first": "Andr\u00e1s",
"middle": [],
"last": "Kocsor",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Text, Speech and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D\u00f3ra Csendes, J\u00e1nos Csirik, Tibor Gyim\u00f3thy, and Andr\u00e1s Kocsor. 2005. The Szeged treebank. In Proceedings of Text, Speech and Dialogue.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Dependency parsing of Hungarian: Baseline results and challenges",
"authors": [
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Vincze",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich\u00e1rd Farkas, Veronika Vincze, and Helmut Schmid. 2012. Dependency parsing of Hungarian: Baseline re- sults and challenges. In Proceedings of EACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Knowledge Sources for Constituent Parsing of German, a Morphologically Rich and Less-Configurational Language",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "Renjing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Fraser, Helmut Schmid, Rich\u00e1rd Farkas, Ren- jing Wang, and Hinrich Sch\u00fctze. 2013. Knowl- edge Sources for Constituent Parsing of German, a Morphologically Rich and Less-Configurational Lan- guage. Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Svmtool: A general POS tagger generator based on Support Vector Machines",
"authors": [
{
"first": "Jes\u00fas",
"middle": [],
"last": "Gim\u00e9nez",
"suffix": ""
},
{
"first": "Lluis",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jes\u00fas Gim\u00e9nez and Lluis M\u00e0rquez. 2004. Svmtool: A general POS tagger generator based on Support Vector Machines. In Proceedings of LREC.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"Ant\u00f2nia"
],
"last": "Mart\u00ed",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jan\u0161t\u011bp\u00e1nek",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Haji\u010d, Massimiliano Ciaramita, Richard Johans- son, Daisuke Kawahara, Maria Ant\u00f2nia Mart\u00ed, Llu\u00eds M\u00e0rquez, Adam Meyers, Joakim Nivre, Sebastian Pad\u00f3, Jan\u0160t\u011bp\u00e1nek, et al. 2009. The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of CoNLL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Morphological tagging: Data vs. dictionaries",
"authors": [],
"year": 2000,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Haji\u010d. 2000. Morphological tagging: Data vs. dictio- naries. In Proceedings of NAACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Czech \"Free",
"authors": [],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Haji\u010d. 2001. Czech \"Free\" Morphology. URL http://ufal.mff.cuni.cz/pdt/Morphology and Tagging.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Efficient staggered decoding for sequence labeling",
"authors": [
{
"first": "Nobuhiro",
"middle": [],
"last": "Kaji",
"suffix": ""
},
{
"first": "Yasuhiro",
"middle": [],
"last": "Fujiwara",
"suffix": ""
},
{
"first": "Naoki",
"middle": [],
"last": "Yoshinaga",
"suffix": ""
},
{
"first": "Masaru",
"middle": [],
"last": "Kitsuregawa",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nobuhiro Kaji, Yasuhiro Fujiwara, Naoki Yoshinaga, and Masaru Kitsuregawa. 2010. Efficient staggered de- coding for sequence labeling. In Proceedings of ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic mod- els for segmenting and labeling sequence data. In Pro- ceedings of ICML.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Practical very large scale CRFs",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Lavergne",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Capp\u00e9",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Lavergne, Olivier Capp\u00e9, and Fran\u00e7ois Yvon. 2010. Practical very large scale CRFs. In Proceed- ings of ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Penn Arabic treebank: Building a large-scale annotated Arabic corpus",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Maamouri",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Buckwalter",
"suffix": ""
},
{
"first": "Wigdan",
"middle": [],
"last": "Mekki",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of NEMLAR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Maamouri, Ann Bies, Tim Buckwalter, and Wigdan Mekki. 2004. The Penn Arabic treebank: Building a large-scale annotated Arabic corpus. In Proceedings of NEMLAR.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Part-of-speech tagging from 97% to 100%: Is it time for some linguistics?",
"authors": [
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning. 2011. Part-of-speech tagging from 97% to 100%: Is it time for some linguistics? In Computational Linguistics and Intelligent Text Pro- cessing. Springer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Mary A. Marcinkiewicz, and Beat- rice Santorini. 1993. Building a large annotated cor- pus of English: The Penn Treebank. Computational linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Crfsuite: A fast implementation of conditional random fields (CRFs",
"authors": [
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naoaki Okazaki. 2007. Crfsuite: A fast implemen- tation of conditional random fields (CRFs). URL http://www.chokkan.org/software/crfsuite.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Freeling 3.0: Towards Wider Multilinguality",
"authors": [
{
"first": "Llu\u00eds",
"middle": [],
"last": "Padr\u00f3",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Stanilovsky",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Llu\u00eds Padr\u00f3 and Evgeny Stanilovsky. 2012. Freeling 3.0: Towards Wider Multilinguality. In Proceedings of LREC.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A maximum entropy model for part-of-speech tagging",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1996,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In EMNLP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Vine pruning for efficient multi-pass dependency parsing",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush and Slav Petrov. 2012. Vine pruning for efficient multi-pass dependency parsing. In Pro- ceedings of NAACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "DMOR Benutzerhandbuch. Universit\u00e4t Stuttgart, Institut f\u00fcr maschinelle Sprachverarbeitung",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Schiller",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Schiller. 1995. DMOR Benutzerhandbuch. Uni- versit\u00e4t Stuttgart, Institut f\u00fcr maschinelle Sprachver- arbeitung.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Estimation of conditional probabilities with decision trees and an application to fine-grained POS tagging",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Laws",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmut Schmid and Florian Laws. 2008. Estimation of conditional probabilities with decision trees and an ap- plication to fine-grained POS tagging. In Proceedings of COLING.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Probabilistic part-of-speech tagging using decision trees",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of NEMLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmut Schmid. 1994. Probabilistic part-of-speech tag- ging using decision trees. In Proceedings of NEMLP.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Guided Learning for Bidirectional Sequence Classification",
"authors": [
{
"first": "Libin",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Libin Shen, Giorgio Satta, and Aravind Joshi. 2007. Guided Learning for Bidirectional Sequence Classifi- cation. In Proceedings of ACL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Hash Kernels for Structured Data",
"authors": [
{
"first": "Qinfeng",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Petterson",
"suffix": ""
},
{
"first": "Gideon",
"middle": [],
"last": "Dror",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Langford",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "S",
"middle": [
"V N"
],
"last": "Vishwanathan",
"suffix": ""
}
],
"year": 2009,
"venue": "J. Mach. Learn. Res",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qinfeng Shi, James Petterson, Gideon Dror, John Lang- ford, Alex Smola, and S.V.N. Vishwanathan. 2009. Hash Kernels for Structured Data. J. Mach. Learn. Res.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Context-based morphological disambiguation with random fields",
"authors": [
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Roy",
"middle": [
"W"
],
"last": "Tromble",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah A. Smith, David A. Smith, and Roy W. Tromble. 2005. Context-based morphological disambiguation with random fields. In Proceedings of EMNLP.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Feature-rich part-of-speech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Pro- ceedings of NAACL.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Stochastic gradient descent training for L1-regularized log-linear models with cumulative penalty",
"authors": [
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Tsujii",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshimasa Tsuruoka, Jun'ichi Tsujii, and Sophia Ana- niadou. 2009. Stochastic gradient descent training for L1-regularized log-linear models with cumulative penalty. In Proceedings of ACL.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Structured prediction cascades",
"authors": [
{
"first": "David",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of AISTATS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Weiss and Ben Taskar. 2010. Structured predic- tion cascades. In In Proceedings of AISTATS.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "More accurate tests for the statistical significance of result differences",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Yeh",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Yeh. 2000. More accurate tests for the statis- tical significance of result differences. In Proceedings of COLING.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Magyarlanc 2.0: Szintaktikai elemz\u00e9s\u00e9s felgyors\u00edtott sz\u00f3faji egy\u00e9rtelm\u0171s\u00edt\u00e9s",
"authors": [
{
"first": "J\u00e1nos",
"middle": [],
"last": "Zsibrita",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Vincze",
"suffix": ""
},
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
}
],
"year": 2013,
"venue": "IX. Magyar Sz\u00e1m\u00edt\u00f3g\u00e9pes Nyelv\u00e9szeti Konferencia",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00e1nos Zsibrita, Veronika Vincze, and Rich\u00e1rd Farkas. 2013. Magyarlanc 2.0: Szintaktikai elemz\u00e9s\u00e9s fel- gyors\u00edtott sz\u00f3faji egy\u00e9rtelm\u0171s\u00edt\u00e9s. In IX. Magyar Sz\u00e1m\u00edt\u00f3g\u00e9pes Nyelv\u00e9szeti Konferencia.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Example training run of a pruned 1 st -order model on German showing the fraction of pruned gold sequences (= sentences) during training for training (train) and development sets (dev).",
"num": null
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>ar</td><td>cs</td><td>es</td><td>de</td><td>hu</td></tr><tr><td colspan=\"2\">1 Oracle \u00b5 0 = 4 90.97 92.59</td><td colspan=\"2\">97.91 89.33</td><td>96.48</td></tr><tr><td colspan=\"5\">2 Model \u00b5 0 = 4 90.90 92.45* 97.95 88.96* 96.47</td></tr><tr><td colspan=\"5\">3 Model \u00b5 0 = 8 90.89 92.48* 97.94 88.94* 96.47</td></tr></table>",
"text": "POS tagging experiments with pruned and unpruned CRFs with different orders n. For every language the training time in minutes (TT) and the POS accuracy (ACC) are given. * indicates models significantly better than CRF (first line)."
},
"TABREF4": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"7\">Best baseline results are underlined and the overall best results bold. * indicates a significant difference (positive or</td></tr><tr><td colspan=\"3\">negative) between the best baseline and a PCRF model.</td><td/><td/><td/><td/></tr><tr><td/><td>ar</td><td>cs</td><td>es</td><td>de</td><td>hu</td><td>en</td></tr><tr><td colspan=\"2\">SVMTool 96.19</td><td>98.82</td><td>98.44</td><td>96.44</td><td>97.32</td><td>97.12</td></tr><tr><td>Morfette</td><td>95.55</td><td>98.91</td><td>98.41</td><td>96.68</td><td>97.28</td><td>96.89</td></tr><tr><td colspan=\"2\">CRFSuite 95.97</td><td>98.91</td><td>98.40</td><td>96.82</td><td>97.32</td><td>96.94</td></tr><tr><td>Stanford</td><td>95.75</td><td>98.99</td><td>98.50</td><td>97.09</td><td>97.32</td><td>97.28</td></tr><tr><td>PCRF 1</td><td colspan=\"3\">96.03* 98.83* 98.46</td><td>97.11</td><td colspan=\"2\">97.44* 97.09*</td></tr><tr><td>PCRF 2</td><td>96.11</td><td colspan=\"5\">98.88* 98.66* 97.36* 97.50* 97.23</td></tr><tr><td>PCRF 3</td><td>96.14</td><td colspan=\"5\">98.87* 98.66* 97.44* 97.49* 97.19*</td></tr></table>",
"text": "Development results for POS tagging. Given are training times in minutes (TT) and accuracies (ACC)."
},
"TABREF5": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td>ar</td><td>cs</td><td>es</td><td>de</td><td>hu</td></tr><tr><td/><td>TT ACC</td><td>TT ACC</td><td>TT ACC</td><td>TT ACC</td><td>TT ACC</td></tr><tr><td colspan=\"2\">SVMTool 454 89.91</td><td>2454 89.91</td><td>64 97.63</td><td>1649 85.98</td><td>3697 95.61</td></tr><tr><td>RFTagger</td><td>4 89.09</td><td>3 90.38</td><td>1 97.44</td><td>5 87.10</td><td>10 95.06</td></tr><tr><td>Morfette</td><td>132 89.97</td><td>539 90.37</td><td>63 97.71</td><td>286 85.90</td><td>540 95.99</td></tr><tr><td colspan=\"2\">CRFSuite 309 89.33</td><td>9274 91.10</td><td>69 97.53</td><td>1295 87.78</td><td>5467 95.95</td></tr><tr><td>PCRF 1</td><td>22 90.90*</td><td colspan=\"2\">301 92.45* 25 97.95*</td><td>32 88.96*</td><td>230 96.47*</td></tr><tr><td>PCRF 2</td><td>26 91.86*</td><td colspan=\"2\">318 93.06* 32 98.01*</td><td>37 90.27*</td><td>242 96.57*</td></tr><tr><td>PCRF 3</td><td>26 91.88*</td><td colspan=\"2\">318 92.97* 35 97.87*</td><td>37 90.60*</td><td>241 96.50*</td></tr></table>",
"text": "Test results for POS tagging. Best baseline results are underlined and the overall best results bold. * indicates a significant difference between the best baseline and a PCRF model."
},
"TABREF6": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td>ar</td><td>cs</td><td>es</td><td>de</td><td>hu</td></tr><tr><td colspan=\"2\">SVMTool 89.58</td><td>89.62</td><td>97.56</td><td>83.42</td><td>95.57</td></tr><tr><td colspan=\"2\">RFTagger 88.76</td><td>90.43</td><td>97.35</td><td>84.28</td><td>94.99</td></tr><tr><td>Morfette</td><td>89.62</td><td>90.01</td><td>97.58</td><td>83.48</td><td>95.79</td></tr><tr><td colspan=\"2\">CRFSuite 89.05</td><td>90.97</td><td>97.60</td><td>85.68</td><td>95.82</td></tr><tr><td>PCRF 1</td><td colspan=\"5\">90.32* 92.31* 97.82* 86.92* 96.22*</td></tr><tr><td>PCRF 2</td><td colspan=\"5\">91.29* 92.94* 97.93* 88.48* 96.34*</td></tr><tr><td>PCRF 3</td><td colspan=\"5\">91.22* 92.99* 97.82* 88.58* 96.29*</td></tr></table>",
"text": "Development results for POS+MORPH tagging. Given are training times in minutes (TT) and accuracies (ACC). Best baseline results are underlined and the overall best results bold. * indicates a significant difference between the best baseline and a PCRF model."
},
"TABREF7": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">sults are given in the following table:</td><td/></tr><tr><td>n ar</td><td>cs</td><td>es</td><td>de</td><td>hu</td></tr><tr><td>1</td><td/><td/><td/><td/></tr></table>",
"text": "Test results for POS+MORPH tagging. Best baseline results are underlined and the overall best results bold. * indicates a significant difference between the best baseline and a PCRF model. 90.90 \u2212 92.45 \u2212 97.95 \u2212 88.96 \u2212 96.47 \u2212 2 91.86 + 93.06 98.01 \u2212 90.27 + 96.57 \u2212 3 91.88 + 92.97 \u2212 97.87 \u2212 90.60 + 96.50 \u2212 MA 1 91.22 93.21 98.27 89.82 97.28 MA 2 92.16 + 93.87 + 98.37 + 91.31 + 97.51 + MA 3 92.14 + 93.88 + 98.28 91.65 + 97.48 +"
}
}
}
}