{
"paper_id": "D17-1044",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:14:31.792560Z"
},
"title": "Learning the Structure of Variable-Order CRFs: a Finite-State Perspective",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Lavergne",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 Paris Saclay Campus Universitaire",
"location": {
"postCode": "F-91 403",
"settlement": "Orsay",
"country": "France"
}
},
"email": "lavergne@limsi.fr"
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 Paris Saclay Campus Universitaire",
"location": {
"postCode": "F-91 403",
"settlement": "Orsay",
"country": "France"
}
},
"email": "yvon@limsi.fr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The computational complexity of linearchain Conditional Random Fields (CRFs) makes it difficult to deal with very large label sets and long range dependencies. Such situations are not rare and arise when dealing with morphologically rich languages or joint labelling tasks. We extend here recent proposals to consider variable order CRFs. Using an effective finitestate representation of variable-length dependencies, we propose new ways to perform feature selection at large scale and report experimental results where we outperform strong baselines on a tagging task.",
"pdf_parse": {
"paper_id": "D17-1044",
"_pdf_hash": "",
"abstract": [
{
"text": "The computational complexity of linearchain Conditional Random Fields (CRFs) makes it difficult to deal with very large label sets and long range dependencies. Such situations are not rare and arise when dealing with morphologically rich languages or joint labelling tasks. We extend here recent proposals to consider variable order CRFs. Using an effective finitestate representation of variable-length dependencies, we propose new ways to perform feature selection at large scale and report experimental results where we outperform strong baselines on a tagging task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Conditional Random Fields (CRFs) (Lafferty et al., 2001; are a method of choice for many sequence labelling tasks such as Part of Speech (PoS) tagging, Text Chunking, or Named Entity Recognition. Linearchain CRFs are easy to train by solving a convex optimization problem, can accomodate rich feature patterns, and enjoy polynomial exact inference procedures. They also deliver state-of-the-art performance for many tasks, sometimes surpassing seq2seq neural models (Schnober et al., 2016) .",
"cite_spans": [
{
"start": 33,
"end": 56,
"text": "(Lafferty et al., 2001;",
"ref_id": "BIBREF9"
},
{
"start": 466,
"end": 489,
"text": "(Schnober et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A major issue with CRFs is the complexity of training and inference procedures, which are quadratic in the number of possible output labels for first order models and grow exponentially when higher order dependencies are considered. This is problematic for tasks such as precise PoS tagging for Morphologically Rich Languages (MRLs), where the number of morphosyntactic labels is in the thousands (Haji\u010d, 2000; M\u00fcller et al., 2013) . Large label sets also naturally arise when joint labelling tasks (eg. simultaneous PoS tag-ging and text chunking) are considered, For such tasks, processing first-order models is demanding, and full size higher-order models are out of the question. Attempts to overcome this difficulty are based on a greedy approach which starts with firstorder dependencies between labels and iteratively increases the scope of dependency patterns under the constraint that a high-order dependency is selected only if it extends an existing lower order feature (M\u00fcller et al., 2013) . As a result, feature selection may only choose only few higherorder features, motivating the need for an effective variable-order CRF (voCRF) training procedure (Ye et al., 2009) . 1 The latest implementation of this idea (Vieira et al., 2016) relies on (structured) sparsity promoting regularization (Martins et al., 2011) and on finite-state techniques, handling high-order features at a small extra cost (see \u00a7 2). In this approach, the sparse set of label dependency patterns is represented in a finite-state automaton, which arises as the result of the feature selection process. In this paper, we somehow reverse the perspective and consider VoCRF training mostly as an automaton inference problem. This leads us to consider alternative techniques for learning the finitestate machine representing the dependency structure of sparse VoCRFs (see \u00a7 3). 
Two lines of enquiries are explored: (a) to take into account the internal structure of large tag sets in order to learn better and/or leaner feature sets; (b) to detect unconditional structural dependencies in label sequences in order to speed-up the discovery of useful features. These ideas are implemented in 6 feature selection strategies, allowing us to explore a large set of dependency structures. Relying on lazy finite-state operations, we train VoCRFs up to order 5, and achieve PoS tagging performance that surpass strong baselines for two MRLs (see \u00a7 4).",
"cite_spans": [
{
"start": 397,
"end": 410,
"text": "(Haji\u010d, 2000;",
"ref_id": "BIBREF7"
},
{
"start": 411,
"end": 431,
"text": "M\u00fcller et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 981,
"end": 1002,
"text": "(M\u00fcller et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 1166,
"end": 1183,
"text": "(Ye et al., 2009)",
"ref_id": "BIBREF28"
},
{
"start": 1186,
"end": 1187,
"text": "1",
"ref_id": null
},
{
"start": 1227,
"end": 1248,
"text": "(Vieira et al., 2016)",
"ref_id": "BIBREF27"
},
{
"start": 1306,
"end": 1328,
"text": "(Martins et al., 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we recall the basics of CRFs and VoCRFs and introduce some notations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variable order CRFs",
"sec_num": "2"
},
{
"text": "First-order CRFs use the following model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basics",
"sec_num": "2.1"
},
{
"text": "p \u03b8 (y|x) = Z \u03b8 (x) \u22121 exp(\u03b8 T F (x, y)) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basics",
"sec_num": "2.1"
},
{
"text": "where x = (x 1 , . . . , x T ) and y = (y 1 , . . . , y T ) are the input (in X T ) and output (in Y T ) sequences and Z \u03b8 (x) is a normalizer. Each component F j (x, y) of the global feature vector decomposes as a sum of local features T t=1 f j (y t\u22121 , y t , x t ) and is associated to parameter \u03b8 j . Local features typically use binary tests and take the form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basics",
"sec_num": "2.1"
},
{
"text": "f u,g (y t\u22121 , y t , x, t) = I(y t = u \u2227 g(x, t)) f uv,g (y t\u22121 , y t , x, t) = I(y t\u22121 y t = uv \u2227 g(x, t))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basics",
"sec_num": "2.1"
},
{
"text": "where I() is an indicator function and g() tests a local property of x around x t . In this setting, the number of parameters is |Y| 2 \u00d7 |X | train , where |A| is the cardinality of A and |X | train is the number of values of g(x, t) observed in the training set. Even in moderate size applications, the parameter set can be very large and contain dozen of millions of features, due to the introduction of sequential dependencies in the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basics",
"sec_num": "2.1"
},
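The feature templates above can be made concrete with a minimal Python sketch (illustrative only, not the authors' code): the observation test g(x, t) is instantiated as the identity of the current word, and the global feature vector F(x, y) is the position-wise sum of the local features.

```python
# Sketch of first-order CRF features f_{u,g} (unigram) and f_{uv,g} (bigram).
# g(x, t) is taken to be the current-word identity, an assumption made here
# purely for illustration; real systems use many observation tests.

def local_features(y_prev, y_cur, x, t):
    g = ("word", x[t])                       # g(x, t): current-word test
    feats = {("uni", y_cur, g): 1}           # I(y_t = u  AND  g(x, t))
    if y_prev is not None:
        feats[("bi", y_prev, y_cur, g)] = 1  # I(y_{t-1} y_t = uv  AND  g(x, t))
    return feats

# Global feature F_j(x, y) = sum over positions t of the local features.
x = ["the", "cat", "sleeps"]
y = ["DET", "NOUN", "VERB"]
F = {}
for t in range(len(x)):
    for k, v in local_features(y[t - 1] if t > 0 else None, y[t], x, t).items():
        F[k] = F.get(k, 0) + v
```

With one unigram feature per position and one bigram feature per position t > 0, this toy sentence fires five distinct features.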
{
"text": "Given N i.i.d. sequences {x (i) , y (i) } N i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basics",
"sec_num": "2.1"
},
{
"text": ", estimation is based on the minimization of the negated conditional log-likelihood l(\u03b8). Optimizing this objective requires to compute its gradient and to repeatedly evaluate the conditional expectation of the feature vector. This can be done using a forward-backward algorithm having a complexity that grows quadratically with |Y|. l(\u03b8) is usually complemented with a regularization term so as to avoid overfitting and stabilize the optimization. Common regularizers use the 1 -or the 2norm of the parameter vector, the former having the benefit to promote sparsity, thereby performing automatic feature selection (Tibshirani, 1996) .",
"cite_spans": [
{
"start": 616,
"end": 634,
"text": "(Tibshirani, 1996)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basics",
"sec_num": "2.1"
},
{
"text": "When the label set is large, many pairs of labels never occur in the training data and the sparsity of label ngrams quickly increases with the order p of the model. In the variable order CRF model, it is assumed that only a small number of ngrams",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variable order CRFs (VoCRFs)",
"sec_num": "2.2"
},
{
"text": "Algorithm 1: Building A[W] W : list of patterns, A[W] initially empty U = Pref(W) foreach w \u2208 W do TrieInsert(w, A[W]) // Add missing transitions foreach u = vy \u2208 U do new FailureTrans(u, LgSuff(v, U)) (out of |Y| p )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variable order CRFs (VoCRFs)",
"sec_num": "2.2"
},
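Algorithm 1 can be sketched in Python as follows. This is an illustrative re-implementation (not the authors' code): a state is a pattern prefix represented as a tuple, and the failure transition of each non-root state points to its longest proper suffix that is itself a prefix in U, in the spirit of Aho-Corasick suffix links.

```python
# Build A[W]: a trie over the patterns in W plus failure transitions.
# states: prefix tuple -> {label: next prefix}; failure: prefix -> prefix.

def build_pattern_automaton(W):
    states = {(): {}}                        # root = empty prefix
    for w in W:                              # TrieInsert: one state per prefix
        for i in range(1, len(w) + 1):
            u, prev = tuple(w[:i]), tuple(w[:i - 1])
            states.setdefault(u, {})
            states[prev][w[i - 1]] = u
    failure = {}
    for u in states:                         # FailureTrans for non-root states
        if not u:
            continue
        for k in range(1, len(u) + 1):       # longest proper suffix in Pref(W)
            if u[k:] in states:
                failure[u] = u[k:]
                break
    return states, failure
```

For W = {AB, BC}, the state reached after reading AB fails over to the state B, so the partial match of BC is preserved.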
{
"text": "are associated with a non-zero parameter value. Denoting W the set of such ngrams and w \u2208 W, a generic feature function is then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variable order CRFs (VoCRFs)",
"sec_num": "2.2"
},
{
"text": "f w,g (w, x, t) = I(y t\u2212s . . . y t = w \u2227 g(x, t)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variable order CRFs (VoCRFs)",
"sec_num": "2.2"
},
{
"text": "In (order-p) VoCRFs, the computational cost of training and inference is proportional to the size of a finite-state automaton A[W] encoding the patterns in W, 2 which can be much less than |Y| p . Our procedure for building A[W] is sketched in Algorithm 1, where TrieInsert inserts a string in a trie, Pref(W) computes the set of prefixes of the strings in W, 3 LgSuff(v, U) returns the longest suffix of v in U, and FailureTrans is a special \u03b5-transition used only when no labelled transition exists (Allauzen et al., 2003) . 4 Each state (or pattern prefix) v in A[W] is associated with a set of feature functions {f u,g , \u2200u \u2208 Suff(v), g}. 5 The forward step of the gradient computation maintains one value \u03b1(v, t) per state and time step, which is recursively accumulated over all paths ending in v at time t.",
"cite_spans": [
{
"start": 501,
"end": 524,
"text": "(Allauzen et al., 2003)",
"ref_id": "BIBREF0"
},
{
"start": 527,
"end": 528,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Variable order CRFs (VoCRFs)",
"sec_num": "2.2"
},
{
"text": "The next question is to identify W. The simplest method keeps all the ngrams viewed in training, additionally filtering rare patterns (Cuong et al., 2014). However, frequency based feature selection does not take interactions into account and is not the best solution. Ideally, one would like to train a complete order-p model with a sparsity promoting penalty, a technique that only works for small label sets. 6 The greedy algorithm of Schmidt and Murphy (2010); Vieira et al. 2016is more scalable: it starts with all unigram patterns and iteratively grows W by extending the ngrams that have been selected in the simpler model. At each round of training, feature selection is performed using a 1 penalty and identifies the patterns that will be further augmented.",
"cite_spans": [
{
"start": 412,
"end": 413,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Variable order CRFs (VoCRFs)",
"sec_num": "2.2"
},
{
"text": "We introduce now several alternatives for learning W. Our motivation for doing so is twofold: (a) to take the internal structure of large label sets into account; (b) to identify more abstract patterns in label sequences, possibly containing gaps or iterations, which could yield smaller A [W] . As discussed below, both motivations can be combined.",
"cite_spans": [
{
"start": 290,
"end": 293,
"text": "[W]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning patterns",
"sec_num": "3"
},
{
"text": "The greedy strategy iteratively grows patterns up to order p. Considering all possible unigram and bigram patterns, we train a sparse model to select a first set of useful bigrams. In subsequent iterations, each pattern w selected at order k is extended in all possible ways to specify the pattern set at order k + 1, which will be filtered during the next training round. This approach is close, yet simpler, than the group lasso approach of Vieira et al. (2016) and experimentally yields slightly smaller pattern sets (see Table 2 ). This is because we do not enforce closure under last-character replacement: once pattern w is pruned, longer patterns ending in w are never considered. 7",
"cite_spans": [
{
"start": 443,
"end": 463,
"text": "Vieira et al. (2016)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 525,
"end": 532,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Greedy 1",
"sec_num": "3.1"
},
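The greedy \u2113_1 strategy can be summarized by the following schematic loop. Here `train_l1` is a hypothetical stand-in for a full \u2113_1-regularized training round that returns a (sparse) weight per candidate pattern; everything else is illustrative.

```python
# Schematic greedy l1 pattern growth: extend only patterns whose weight
# survives l1-regularized training; a pruned pattern is never extended again.

def greedy_grow(labels, max_order, train_l1):
    W = [(y,) for y in labels]                  # all unigram patterns
    selected = []
    for order in range(2, max_order + 1):
        candidates = [w + (y,) for w in W for y in labels]
        weights = train_l1(candidates)          # sparse weights, many exactly 0
        W = [w for w in candidates if weights.get(w, 0.0) != 0.0]
        selected.extend(W)
    return selected
```

A toy selector that keeps only patterns containing label "N" illustrates the pruning: once (V, V) is dropped at order 2, no order-3 pattern ending in (V, V) is ever proposed.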
{
"text": "Large tag sets often occur in joint tasks, where multiple levels of information are encoded in one compound tag. For instance, the fine grain labels in the Tiger corpus (Brants et al., 2002) combine PoS and morphological information in tags such as NN.Dat.Sg.Fem for a feminine singular dative noun. In the sequel, we refer to each piece of information as a tag component. We assume that all tags contain the same components, using a \"non-applicable\" value whenever needed. Using features that test arbitrary combinations of tag components would make feature selection much more difficult, as the number of possible patterns grows combinatorially with the number of components. We keep things simple by allowing features to only evaluate one single component at a time: this allows us to identify dependencies of different orders for each component.",
"cite_spans": [
{
"start": 169,
"end": 190,
"text": "(Brants et al., 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Component-wise training",
"sec_num": "3.2"
},
{
"text": "Assuming that each tag y contains K components y = [z 1 , z 2 . . . , z K ], with z k \u2208 Y k , W is then computed as in \u00a7 3.1, except that we now consider one distinct set of patterns W k for each component k. At each training round, each set W k is extended and pruned independently from the others. Note that all these automata are trained simultaneously using a common set of features. This process results in K automata, which are intersected on the fly 8 using \"lazy\" composition. In our experiments, we also consider the case where we additionally combine the automaton representing complete tag sequences: this has the beneficial effect to restrict the combinations of subtags to values that actually exist in the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Component-wise training",
"sec_num": "3.2"
},
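For concreteness, splitting a compound Tiger-style tag into a fixed number of components might look like the sketch below (illustrative only: the tag format is simplified to dot-separated fields, and the "NA" padding value is an assumption).

```python
# Split a compound tag into K components, padding with a "non-applicable"
# value so that all tags have the same arity.

def split_tag(tag, k=4, na="NA"):
    parts = tag.split(".")
    return tuple(parts + [na] * (k - len(parts)))
```

For example, a full morphological tag keeps its four fields, while a bare PoS tag is padded out to the same arity.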
{
"text": "Another approach for computing W assumes that useful dependencies between tags can be identified using an auxiliary language model (LM) trained without paying any attention to observation sequences. A pattern w will then be deemed useful for the labelling task only if w is a useful history in a LM of tag sequences. This strategy was implemented by first training a compact pgram LM with entropy pruning 9 (Stolcke, 1998) and including all the surviving histories in W. In a second step, we train the complete CRF as usual, with all observation features and 1 penalty to further prune the parameter set. ",
"cite_spans": [
{
"start": 407,
"end": 422,
"text": "(Stolcke, 1998)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pruned language models",
"sec_num": "3.3"
},
{
"text": "Another technique, which combines the two previous ideas, relies on Maximum Entropy LMs (MELMs) (Rosenfeld, 1996) . MELMs decompose the probabililty of a sequence y 1 . . . y T using the chain rule, where each term p \u03bb (y t |y <t ) is a locally normalized exponential model including all possible ngram features up to order p:",
"cite_spans": [
{
"start": 96,
"end": 113,
"text": "(Rosenfeld, 1996)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy language models",
"sec_num": "3.4"
},
{
"text": "p(y t |y <t ; \u03bb) = Z(\u03bb) \u22121 exp \u03bb T G(y 1 . . . y t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy language models",
"sec_num": "3.4"
},
{
"text": "In contrast to globally normalized models, the complexity of training remains linear wrt. |Y|, irrespective of p. It it also straightforward both to (a) use a 1 penalty to perform feature selection; (b) include features that only test specific components of a complex tag. For an order p model, our feature functions evaluate all n-grams (for n \u2264 p) of complete tags or of one specific component:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy language models",
"sec_num": "3.4"
},
{
"text": "G w (y 1 , . . . , y t ) =I(y t\u2212n+1 . . . y t = w) G u (y 1 , . . . , y t ) =I(z k,t\u2212n+1 . . . z k,t = u)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy language models",
"sec_num": "3.4"
},
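The feature functions G_w and G_u can be sketched as follows: for an order-p model, collect all label n-grams (n \u2264 p) ending at position t, both over complete tags and over one chosen component of tuple-valued tags. This is illustrative code, not the authors' implementation.

```python
# MELM n-gram features: complete-tag n-grams (G_w) and component-k n-grams
# (G_u), for all n <= p ending at position t. Tags are tuples of components.

def melm_features(y, t, p, k=0):
    feats = []
    for n in range(1, p + 1):
        if t + 1 < n:                         # not enough history for order n
            break
        w = tuple(y[t - n + 1 : t + 1])       # complete-tag n-gram
        feats.append(("G_w", w))
        u = tuple(tag[k] for tag in w)        # component-k n-gram
        feats.append(("G_u", k, u))
    return feats
```

On a two-tag history this yields two complete-tag features and two component features.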
{
"text": "Once a first round of feature selection has been performed, 10 we compute A[W] as explained above. The last step of training reintroduces the observations and estimates the CRF paramaters. A variant of this approach adds extra gappy features to the n-gram features. Gappy features at order p test whether some label u occurs in the remote past anywhere between position t \u2212 p + 1 11 and t \u2212 n. They take the following form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy language models",
"sec_num": "3.4"
},
{
"text": "G w,u (y 1 , . . . , y t ) =I(y t\u2212n+1 . . . y t = w\u2227 u \u2208 {y t\u2212p+1 . . . y t\u2212n }),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy language models",
"sec_num": "3.4"
},
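A gappy feature of this kind can be sketched as a simple indicator (0-based positions; the function name and signature are illustrative, not from the paper):

```python
# Gappy feature G_{w,u}: the last n = len(w) labels equal w, AND label u
# occurs somewhere in the gap window of positions t-p+1 .. t-n.

def gappy_feature(y, t, w, u, p):
    n = len(w)
    if t + 1 < n or tuple(y[t - n + 1 : t + 1]) != tuple(w):
        return 0                               # n-gram part does not match
    window = y[max(0, t - p + 1) : t - n + 1]  # remote-past gap window
    return int(u in window)
```

For instance, with history A B C B, the bigram (C, B) at the final position fires together with the remote label A for p = 4, but not with an unseen label.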
{
"text": "and likewise for features testing components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy language models",
"sec_num": "3.4"
},
{
"text": "The following protocol is used throughout: (a) identify W ( \u00a73) -note that this may imply to tune a regularization parameter; (b) train a full model (including tests on the observations for each pattern in W) using 1 regularization and a very small 2 term to stabilize convergence. The best regularization in (a) and (b) is selected on development data and targets either perplexity (for LMs) or label accuracy (for CRFs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training protocol",
"sec_num": "4.1"
},
{
"text": "Experiments are run on two MRLs: for Czech, we use the CoNLL 2009 data set (Haji\u010d et al., 2009) and for German, the Tiger Treebank with the split of Fraser et al. (2013) ). Both datasets include rich morphological attributes (cf. Table 1 ). All the patterns in W are combined with lexical features testing the current word x t , its prefixes and suffixes of length 1 to 4, its capitalization and the presence of digit or punctuation symbols. Additional contextual features also test words in a local window around position t. These tests greatly increase the feature count and are not provided for all label patterns: for unigram patterns, we test the presence of all unigrams and bigrams of words in a window of 5 words; for bigrams patterns we only test for all unigrams in a window of 3 words. Contextual features are not used for larger patterns.",
"cite_spans": [
{
"start": 75,
"end": 95,
"text": "(Haji\u010d et al., 2009)",
"ref_id": "BIBREF8"
},
{
"start": 149,
"end": 169,
"text": "Fraser et al. (2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 230,
"end": 237,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets and Features",
"sec_num": "4.2"
},
{
"text": "We consider several baselines: Maxent and MEMM models, neither of which considers label dependencies in training, a linear chain CRF 12 and our own implementation of the group lasso of Vieira et al. (2016) . For the latter, we contrast two setups: one where each pattern in W gives rise to one single feature, and one where it is conjoined with tests on the observation. 13 All scores in Table 2 are label accuracies on unseen test data. As expected, Maxent and MEMM are outperformed by almost all variants of CRFs, and their scores are only reported for completeness. Group lasso results demonstrate the effectiveness of using contextual information with high order features: the gain is \u2248 0.7 points for both languages and all values of p. Greedy 1 achieves accuracy results similar to group lasso, suggesting that 1 penalty alone is effective to select highorder features. It also yields slighly smaller models and very comparable training time across the board: indeed, greedy parameter selection strategies imply multiple rounds of training which are overall quite costly, due to the size of the full label set. Testing individual subtags ( \u00a7 3.2) results in a slight improvement (\u2248+0.3) in accuracy over Greedy 1 . When using an additional automata for the full tag, we get a larger gain of \u2248 0.6 points for Czech, slightly less for German: including a model for complete tags also prevents to gener-cz de p = 2 p = 3 p = 4 p = 5 p = 2 p = 3 p = 4 p = 5 90.01%",
"cite_spans": [
{
"start": 185,
"end": 205,
"text": "Vieira et al. (2016)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 388,
"end": 437,
"text": "Table 2 are label accuracies on unseen test data.",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "91 1 is described in section 3.1, Component-wise is the decomposition approach of \u00a7 3.2, PrunedLM and MELM (+Gaps) were described in \u00a7 3.3 and \u00a7 3.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "ate invalid combinations of subtags. These models represent different tradeoffs between accuracy and training time: the 4-gram Component-wise experiment only took 14 hrs to complete on German data and outperforms the corresponding Greedy 1 setup while containing approximately 100 times less features. Component-wise+Full is more comparable in size and training time to Greedy 1 , but yields a larger improvement in performance. The last sets of experiments with LMs yields even better operating points, as the first stage of pattern selection is performed with a cheap model. They are our best trade-off to date, yielding the best performance for all values of p.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In this work, we have explored ways to take advantage of the flexibility offered by implementations of VoCRFs based on finite-state techniques. We have proposed strategies to include tests on subparts of complex tags, as well as to select useful label patterns with auxiliary unconditional LMs. Experiments with two MRLs with large tagsets yielded consistent improvements (\u2248 +0.8 points) over strong baselines. They offer new perspectives to perform feature selection in high order CRFs. In our future work, we intend to also explore how to complement 1 penalties with terms penalizing more explicitely the processing time; we also wish to study how these ideas can be used in combination with neural models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "This is reminiscent of variable order HMMs, introduced eg. in(Sch\u00fctze and Singer, 1994;Ron et al., 1996).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "More precisely,Vieira et al. (2016) consider W, the closure of W under suffix and last character substitution, which factors as W = H \u00d7 Y. The complexity of training depends on the size of the finite-state automaton representing W.3 A trie has one state for each prefix.4 This was also suggested by Cotterell and Eisner (2015) as a way to build a more compact pattern automaton.5 Upon reaching a state v, we need to access the features that fire for that pattern, and also for all its suffixes. Each state thus stores a set of pattern; each pattern is associated with a set of tests on the observation (cf. 2.1).6 Recall that the size of parameter set is exponential wrt. the model order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "cf. the discussion in(Vieira et al., 2016, \u00a7 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Formally, each A[W k ] has transitions labelled with elements of Y k ; lazy intersection operates on \"generalized\" transitions, where each label z is replaced with[?, . . . , z, . . . , ?], where ? matches any symbol. A[W] is the intersection k A[W k ] and is labelled with completely specified tags.9 Starting with a full back-off n-gram language model, this approach discards n-grams if their removal causes a sufficiently small drop in cross-entropy. We used the implementation ofStolcke (2002).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As the LM building step only look at labels, we tune the regularization to optimize the perplexity of the LM on a development set.11 We use p = 6 in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Using the implementation ofLavergne et al. (2010).13 As suggested by the authors themselves in fn 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors wish to thank the reviewers for their useful comments and suggestions. This work has been partly funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 645452 (QT21).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Generalized algorithms for constructing statistical language models",
"authors": [
{
"first": "Cyril",
"middle": [],
"last": "Allauzen",
"suffix": ""
},
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "40--47",
"other_ids": {
"DOI": [
"10.3115/1075096.1075102"
]
},
"num": null,
"urls": [],
"raw_text": "Cyril Allauzen, Mehryar Mohri, and Brian Roark. 2003. Generalized algorithms for constructing sta- tistical language models. In Proceedings of the 41st Annual Meeting of the Association for Com- putational Linguistics. Association for Computa- tional Linguistics, Sapporo, Japan, pages 40-47. https://doi.org/10.3115/1075096.1075102.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "Adam",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Comput. Linguist",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy ap- proach to natural language processing. Comput. Linguist. 22(1):39-71.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The TIGER treebank",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Stefanie",
"middle": [],
"last": "Dipper",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Lezius",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the workshop on treebanks and linguistic theories",
"volume": "",
"issue": "",
"pages": "24--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Brants, Stefanie Dipper, Silvia Hansen, Wolf- gang Lezius, and George Smith. 2002. The TIGER treebank. In Proceedings of the workshop on tree- banks and linguistic theories. pages 24-41.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Penalized expectation propagation for graphical models over strings",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell and Jason Eisner. 2015. Penalized expectation propagation for graphical mod- els over strings. In Proceedings of the 2015",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "932--942",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1094"
]
},
"num": null,
"urls": [],
"raw_text": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 932-942. https://doi.org/10.3115/v1/N15-1094.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Conditional Random Field with High-order Dependencies for Sequence Labeling and Segmentation",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Nguyen Viet Cuong",
"suffix": ""
},
{
"first": "Wee",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Hai",
"middle": [
"Leong"
],
"last": "Sun Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chieu",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "",
"pages": "981--1009",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nguyen Viet Cuong, Nan Ye, Wee Sun Lee, and Hai Leong Chieu. 2014. Conditional Ran- dom Field with High-order Dependencies for Sequence Labeling and Segmentation. Jour- nal of Machine Learning Research 15:981-1009. http://jmlr.org/papers/v15/cuong14a.html.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Knowledge sources for constituent parsing of german, a morphologically rich and less-configurational language",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "Renjing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2013,
"venue": "CL",
"volume": "39",
"issue": "1",
"pages": "57--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Fraser, Helmut Schmid, Rich\u00e1rd Farkas, Renjing Wang, and Hinrich Sch\u00fctze. 2013. Knowl- edge sources for constituent parsing of german, a morphologically rich and less-configurational lan- guage. CL 39(1):57-85.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Morphological tagging: Data vs. dictionaries",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference",
"volume": "",
"issue": "",
"pages": "94--101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Haji\u010d. 2000. Morphological tagging: Data vs. dic- tionaries. In Proceedings of the 1st North American chapter of the Association for Computational Lin- guistics conference. Seattle, WA, pages 94-101.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"Ant\u00f2nia"
],
"last": "Mart\u00ed",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u0160t\u011bp\u00e1nek",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Stra\u0148\u00e1k",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task. CoNLL '09",
"volume": "",
"issue": "",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Haji\u010d, Massimiliano Ciaramita, Richard Johans- son, Daisuke Kawahara, Maria Ant\u00f2nia Mart\u00ed, Llu\u00eds M\u00e0rquez, Adam Meyers, Joakim Nivre, Sebastian Pad\u00f3, Jan \u0160t\u011bp\u00e1nek, Pavel Stra\u0148\u00e1k, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The conll-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the Thir- teenth Conference on Computational Natural Lan- guage Learning: Shared Task. CoNLL '09, pages 1-18.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 18th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. In Proceedings of the 18th Interna- tional Conference on Machine Learning. Morgan Kaufmann, San Francisco, CA, Williamstown, MA, (ICML'01), pages 282-289.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Practical very large scale CRFs",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Lavergne",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Capp\u00e9",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Uppsala, Sweden",
"volume": "",
"issue": "",
"pages": "504--513",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Lavergne, Olivier Capp\u00e9, and Fran\u00e7ois Yvon. 2010. Practical very large scale CRFs. In Pro- ceedings of the 48th Annual Meeting of the Associ- ation for Computational Linguistics. Uppsala, Swe- den, pages 504-513.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Structured sparsity in structured prediction",
"authors": [
{
"first": "Andre",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Figueiredo",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Aguiar",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1500--1511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andre Martins, Noah Smith, Mario Figueiredo, and Pedro Aguiar. 2011. Structured sparsity in struc- tured prediction. In Proceedings of the 2011 Con- ference on Empirical Methods in Natural Language Processing. pages 1500-1511.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Joint lemmatization and morphological tagging with lemming",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2268--2274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas M\u00fcller, Ryan Cotterell, Alexander Fraser, and Hinrich Sch\u00fctze. 2015. Joint lemmatization and morphological tagging with lemming. In Proceed- ings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal, EMNLP'15, pages 2268-2274.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Efficient higher-order CRFs for morphological tagging",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "322--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas M\u00fcller, Helmut Schmid, and Hinrich Sch\u00fctze. 2013. Efficient higher-order CRFs for morpholog- ical tagging. In Proceedings of the 2013 Confer- ence on Empirical Methods in Natural Language Processing. Seattle, Washington, USA, EMNLP'15, pages 322-332.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Sparse forward-backward using minimum divergence beams for fast training of conditional random fields",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Pal",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE International Conference on Acoustics Speech and Signal Processing Proceedings",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2006.1661342"
]
},
"num": null,
"urls": [],
"raw_text": "Chris Pal, Charles Sutton, and Andrew McCal- lum. 2006. Sparse forward-backward using min- imum divergence beams for fast training of con- ditional random fields. In 2006 IEEE Interna- tional Conference on Acoustics Speech and Sig- nal Processing Proceedings. volume 5, pages V-V. https://doi.org/10.1109/ICASSP.2006.1661342.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The power of amnesia: Learning probabilistic automata with variable memory length",
"authors": [
{
"first": "Dana",
"middle": [],
"last": "Ron",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
},
{
"first": "Naftali",
"middle": [],
"last": "Tishby",
"suffix": ""
}
],
"year": 1996,
"venue": "Machine Learning",
"volume": "25",
"issue": "2-3",
"pages": "117--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dana Ron, Yoram Singer, and Naftali Tishby. 1996. The power of amnesia: Learning probabilistic au- tomata with variable memory length. Machine Learning 25(2-3):117-149.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A maximum entropy approach to adaptive statistical language modeling",
"authors": [
{
"first": "Ronald",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 1996,
"venue": "Computer, Speech and Language",
"volume": "10",
"issue": "",
"pages": "187--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald Rosenfeld. 1996. A maximum entropy ap- proach to adaptive statistical learning modeling. Computer, Speech and Language 10:187 -228.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Convex structure learning in log-linear models: Beyond pairwise potentials",
"authors": [
{
"first": "Mark",
"middle": [
"W"
],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"P"
],
"last": "Murphy",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "709--716",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark W. Schmidt and Kevin P. Murphy. 2010. Convex structure learning in log-linear models: Beyond pair- wise potentials. In Proceedings of the Thirteenth In- ternational Conference on Artificial Intelligence and Statistics,. Chia Laguna Resort, Sardinia, Italy, AIS- TATS, pages 709-716.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Still not there? Comparing traditional sequence-to-sequence models to encoder-decoder neural networks on monotone string translation tasks",
"authors": [
{
"first": "Carsten",
"middle": [],
"last": "Schnober",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Erik-L\u00e2n Do",
"middle": [],
"last": "Dinh",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee",
"volume": "",
"issue": "",
"pages": "1703--1714",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carsten Schnober, Steffen Eger, Erik-L\u00e2n Do Dinh, and Iryna Gurevych. 2016. Still not there? comparing traditional sequence-to-sequence models to encoder- decoder neural networks on monotone string trans- lation tasks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, Osaka, Japan, pages 1703- 1714.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Part-of-speech tagging using a variable memory Markov model",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "181--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze and Yoram Singer. 1994. Part-of- speech tagging using a variable memory Markov model. In Proceedings of the 32nd Annual Meet- ing of the Association for Computational Linguis- tics. Las Cruces, New Mexico, pages 181-187.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Exact decoding for jointly labeling and chunking sequences",
"authors": [
{
"first": "Nobuyuki",
"middle": [],
"last": "Shimizu",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Haas",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of COLING/ACL",
"volume": "",
"issue": "",
"pages": "763--770",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nobuyuki Shimizu and Andrew Haas. 2006. Exact de- coding for jointly labeling and chunking sequences. In Proceedings of COLING/ACL. pages 763-770.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Context-based morphological disambiguation with random fields",
"authors": [
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Roy",
"middle": [
"W"
],
"last": "Tromble",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "475--482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah A. Smith, David A. Smith, and Roy W. Tromble. 2005. Context-based morphological disambiguation with random fields. In Proceedings of Human Lan- guage Technology Conference and Conference on Empirical Methods in Natural Language Process- ing. Vancouver, British Columbia, Canada, pages 475-482.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Entropy-based pruning of backoff language models",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. DARPA Broadcast News Transcription and Understanding Workshop",
"volume": "",
"issue": "",
"pages": "270--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 1998. Entropy-based pruning of backoff language models. In Proc. DARPA Broad- cast News Transcription and Understanding Work- shop. Lansdowne, VA, pages 270-274.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "SRILM -an extensible language modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Conference on Spoken Language Processing (ICSLP)",
"volume": "2",
"issue": "",
"pages": "901--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM -an extensible lan- guage modeling toolkit. In Proceedings of the Inter- national Conference on Spoken Langage Processing (ICSLP). Denver, CO, volume 2, pages 901-904.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "An introduction to conditional random fields for relational learning",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2006,
"venue": "Introduction to Statistical Relational Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Sutton and Andrew McCallum. 2006. An in- troduction to conditional random fields for relational learning. In Lise Getoor and Ben Taskar, editors, Introduction to Statistical Relational Learning. The MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Regression shrinkage and selection via the lasso",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Tibshirani",
"suffix": ""
}
],
"year": 1996,
"venue": "Journal of the Royal Statistical Society B",
"volume": "58",
"issue": "1",
"pages": "267--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Tibshirani. 1996. Regression shrinkage and se- lection via the lasso. Journal of the Royal Statistical Society B 58(1):267-288.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A global model for joint lemmatization and part-of-speech prediction",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "486--494",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova and Colin Cherry. 2009. A global model for joint lemmatization and part-of-speech prediction. In Proceedings of the Joint Confer- ence of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natu- ral Language Processing of the AFNLP. Associa- tion for Computational Linguistics, pages 486-494. http://aclweb.org/anthology/P09-1055.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Speed-accuracy tradeoffs in tagging with variable-order CRFs and structured sparsity",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Vieira",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1973--1978",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Vieira, Ryan Cotterell, and Jason Eisner. 2016. Speed-accuracy tradeoffs in tagging with variable- order crfs and structured sparsity. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing. EMNLP, pages 1973- 1978.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Conditional random fields with high-order features for sequence labeling",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Wee",
"middle": [
"S"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Hai",
"middle": [
"L"
],
"last": "Chieu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in Neural Information Processing Systems 22",
"volume": "",
"issue": "",
"pages": "2196--2204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nan Ye, Wee S. Lee, Hai L. Chieu, and Dan Wu. 2009. Conditional random fields with high-order features for sequence labeling. In Y. Bengio, D. Schu- urmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22. Curran Associates, Inc., pages 2196-2204. http://papers.nips.cc/paper/3815- conditional-random-fields-with-high-order- features-for-sequence-labeling.pdf.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"html": null,
"text": "",
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"num": null,
"html": null,
"text": "Experimental results. Each cell reports accuracy, number of states in A[W], and total training time. Group lasso is our reimplementation of Vieira et al. (2016) (+Ctx = +context features); Greedy",
"type_str": "table",
"content": "<table/>"
}
}
}
}