{
"paper_id": "D10-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:52:25.295051Z"
},
"title": "Crouching Dirichlet, Hidden Markov Model: Unsupervised POS Tagging with Context Local Tag Generation",
"authors": [
{
"first": "Taesun",
"middle": [],
"last": "Moon",
"suffix": "",
"affiliation": {},
"email": "tsmoon@mail.utexas.edu"
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": "",
"affiliation": {},
"email": "katrin.erk@mail.utexas.edu"
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": "",
"affiliation": {},
"email": "jbaldrid@mail.utexas.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We define the crouching Dirichlet, hidden Markov model (CDHMM), an HMM for part-of-speech tagging which draws state prior distributions for each local document context. This simple modification of the HMM takes advantage of the dichotomy in natural language between content and function words. In contrast, a standard HMM draws all prior distributions once over all states, and it is known to perform poorly in unsupervised and semi-supervised POS tagging. This modification significantly improves unsupervised POS tagging performance across several measures on five data sets for four languages. We also show that simply using different hyperparameter values for content and function word states in a standard HMM (which we call HMM+) is surprisingly effective.",
"pdf_parse": {
"paper_id": "D10-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "We define the crouching Dirichlet, hidden Markov model (CDHMM), an HMM for part-of-speech tagging which draws state prior distributions for each local document context. This simple modification of the HMM takes advantage of the dichotomy in natural language between content and function words. In contrast, a standard HMM draws all prior distributions once over all states, and it is known to perform poorly in unsupervised and semi-supervised POS tagging. This modification significantly improves unsupervised POS tagging performance across several measures on five data sets for four languages. We also show that simply using different hyperparameter values for content and function word states in a standard HMM (which we call HMM+) is surprisingly effective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Hidden Markov Models (HMMs) are simple, versatile, and widely-used generative sequence models. They have been applied to part-of-speech (POS) tagging in supervised (Brants, 2000) , semi-supervised (Goldwater and Griffiths, 2007; Ravi and Knight, 2009) and unsupervised (Johnson, 2007) training scenarios. Though discriminative models achieve better performance in both semi-supervised (Smith and Eisner, 2005) and supervised (Toutanova et al., 2003) learning, there has been only limited work on unsupervised discriminative sequence models (e.g., on synthetic data and protein sequences (Xu et al., 2006) ), and none applied to POS tagging.",
"cite_spans": [
{
"start": 164,
"end": 178,
"text": "(Brants, 2000)",
"ref_id": "BIBREF5"
},
{
"start": 197,
"end": 228,
"text": "(Goldwater and Griffiths, 2007;",
"ref_id": "BIBREF12"
},
{
"start": 229,
"end": 251,
"text": "Ravi and Knight, 2009)",
"ref_id": "BIBREF24"
},
{
"start": 269,
"end": 284,
"text": "(Johnson, 2007)",
"ref_id": "BIBREF18"
},
{
"start": 385,
"end": 409,
"text": "(Smith and Eisner, 2005)",
"ref_id": "BIBREF28"
},
{
"start": 425,
"end": 449,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF31"
},
{
"start": 587,
"end": 604,
"text": "(Xu et al., 2006)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The tagging accuracy of purely unsupervised HMMs is far below that of supervised and semisupervised HMMs; this is unsurprising as it is still not well understood what kind of structure is being found by an unconstrained HMM (Headden III et al., 2008) . However, HMMs are fairly simple directed graphical models, and it is straightforward to extend them to define alternative generative processes, including linguistically motivated ones that recover states and sequences corresponding more closely to those implicitly defined by linguists when they label sentences with parts-of-speech.",
"cite_spans": [
{
"start": 224,
"end": 250,
"text": "(Headden III et al., 2008)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One way in which a basic HMM's structure is a poor model for POS tagging is that there is no inherent distinction between (open-class) content words and (closed-class) function words. Here, we propose two extensions to the HMM. The first, HMM+, is a very simple modification where two different hyperparameters are posited for content states and function states, respectively. The other is the crouching Dirichlet, hidden Markov model (CDHMM), an extended HMM that captures this dichotomy based on the statistical evidence that comes from context. Content states display greater variance across local context (e.g. sentences, paragraphs, documents), and we capture this variance by adding a component to the model for content states that is based on latent Dirichlet allocation (Blei et al., 2003) . This extension is in some ways similar to the LDAHMM of Griffiths et al. (2005) . Both models are composite in that observations are generated from two distinct sets of distributions that do not mix with each other. Unlike in the LDAHMM, however, the generation of content states is folded directly into the CDHMM's generative process.",
"cite_spans": [
{
"start": 778,
"end": 797,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 856,
"end": 879,
"text": "Griffiths et al. (2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We compare the HMM+ and CDHMM against a basic HMM and LDAHMM on POS tagging on a more extensive and diverse set of languages than previous work in monolingual unsupervised POS tagging: four languages from three families (Germanic: English and German; Romance: Portuguese;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "and Mayan: Uspanteko). The CDHMM easily outperforms all other models, including HMM+, across three measures (accuracy, F-score, and variation of information) for unsupervised POS tagging on most data sets. However, the HMM+ is surprisingly competitive, outperforming the basic HMM and LDAHMM, and rivaling or even passing the CDHMM on some measures and data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Bayesian formulation for a basic HMM (Goldwater and Griffiths, 2007) is:",
"cite_spans": [
{
"start": 56,
"end": 72,
"text": "Griffiths, 2007)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "\u03c8_t|\u03be \u223c Dir(\u03be), \u03b4_t|\u03b3 \u223c Dir(\u03b3), w_i|t_i = t \u223c Mult(\u03c8_t), t_i|t_{i\u22121} = t \u223c Mult(\u03b4_t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Dir is the Dirichlet distribution, the conjugate prior to Mult (a multinomial distribution). The state transitions are generated by Mult(\u03b4_t), whose prior \u03b4_t is drawn from Dir(\u03b3) with a symmetric (i.e. uniform) hyperparameter \u03b3. Emissions are generated by Mult(\u03c8_t), with prior \u03c8_t drawn from Dir(\u03be) with a symmetric hyperparameter \u03be. Hyperparameter values smaller than one encourage peaked posteriors, with smaller values increasing this concentration. The hyperparameters need not be symmetric, but symmetry is a common choice when one wants to be na\u00efve about the data. This is particularly appropriate in unsupervised POS tagging on novel data, since there are no a priori grounds for favoring certain distributions over others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "There is considerable work on extensions to HMM-based unsupervised POS tagging (see \u00a76), but here we concentrate on the LDAHMM (Griffiths et al., 2005) , which models topics and state sequences jointly. The model is a composite of a probabilistic topic model and an HMM in which a single state is allocated for words generated from the topic model. A strength of this model is that it is able to use less supervision than previous topic models since it does not require a stopword list. While the topic model component still uses the bag-of-words assumption, the joint model infers which words are more likely to carry topical content and which words are more likely to contribute to the local sequence. This model is competitive with a standard topic model, and its output is also competitive when compared with a standard HMM. However, Griffiths et al. (2005) note that the topic model component inevitably loses some finer distinctions with respect to parts-of-speech. Though many content states such as adjectives, verbs, and nouns can vary a great deal across documents, the topic state groups these words together. This leads to assignment of word tokens to clusters that are a poorer fit for POS tagging. This paper shows that a model that conflates the LDAHMM topics with content states can significantly improve POS tagging.",
"cite_spans": [
{
"start": 127,
"end": 151,
"text": "(Griffiths et al., 2005)",
"ref_id": "BIBREF14"
},
{
"start": 838,
"end": 861,
"text": "Griffiths et al. (2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "We aim to model the fact that in many languages words can generally be grouped into function words and content words and that these groups often have significantly different distributions. There are few function words and they appear frequently, while there are many content words appearing infrequently. Another difference in distribution is often implied in information retrieval by the use of stopword filters and tf-idf values to remove or reduce the influence of words which occur frequently but have low variance (i.e. their global probability is similar to their local probability in a document).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "A difference in distribution is also revealed when the parts-of-speech are known. When no smoothing parameters are added, the joint probability of a word that is not 'the' or 'a' occurring with a DT tag (in the Penn Treebank) is almost always zero. Similarly peaked distributions are observed for other function categories such as MD and CC. On the other hand, the joint probability of any word occurring with NN is much less likely to be zero and the distribution is much less likely to be peaked.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "We attempt to account for these two distributional properties-that certain words have higher variance across contexts (e.g. a document) and that certain tags have more peaked emission distributions-in a sequence model. To do this, we define the crouching Dirichlet, hidden Markov model 1 (CDHMM). This model, like LDAHMM, captures items of high variance across contexts, but it does so without losing Figure 1: Graphical representation of relevant variables and dependencies at a given time step i. Observed word w_i is dependent on hidden state t_i.",
"cite_spans": [],
"ref_spans": [
{
"start": 401,
"end": 409,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "[Figure 1 diagram: observed word w_i, hidden state t_i, priors \u03b8, \u03b4, \u03c6, \u03c8, and hyperparameters \u03b1, \u03b3, \u03b2, \u03be.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "Edges to priors \u03b8, \u03c6, \u03c8 may or may not be activated depending on the value of t_i. The edge to transition prior \u03b4 is always activated. Hyperparameters to priors are represented by dots. See \u00a73.1 for details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "sequence distinctions, namely, a given word's local function via its part-of-speech. We also define the HMM+, a simple adaptation of a basic HMM which accounts for the latter property by using different priors for emissions from content and function states.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "The CDHMM incorporates an LDA-like module into its graphical structure in order to capture words and tags that have high variance across contexts. Such tags correspond to content states. Like the LDAHMM, the model is composite in that distributions over a single random variable are composed of several different distribution functions which depend on the value of the underlying variable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "We posit the following model (see fig. 1 for a diagram of dependencies and all variables involved at a single time step). We observe a sequence of tokens w=(w_1, . . . , w_N) that we assume is generated by an underlying state sequence t=(t_1, . . . , t_N) over a state alphabet T with first-order Markov dependencies. T is a union of disjoint content states C and function states F. In this composite model, the priors for the emission and transition for each step in the sequence depend on whether state t at step i is t\u2208C or t\u2208F. If t\u2208C, the word emission is dependent on \u03c6 (the content word prior) and the state transition is dependent on \u03b8 (the \"topic\" prior) and \u03b4 (the transition prior). If t\u2208F, the word emission probability is dependent on \u03c8 (the function word prior) and the state transition on \u03b4 (again, the transition prior). Therefore, if t\u2208F, the transition and emission structure is identical to the standard Bayesian HMM.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 40,
"text": "fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "To elaborate, three prior distributions are defined globally for this model: (1) \u03b4_t, the transition prior, such that p(t'|t, \u03b4_t) = \u03b4_{t'|t}; (2) \u03c8_t, the function word prior, such that p(w|t, \u03c8_t) = \u03c8_{w|t}; (3) \u03c6_t, the content word prior, such that p(w|t, \u03c6_t) = \u03c6_{w|t}. Locally for each context d (documents in our case), we define \u03b8_d, the topic prior, such that p(t|\u03b8_d) = \u03b8_{t|d} for t\u2208C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "The generative story is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "1. For each state t\u2208T: (a) draw a distribution over states \u03b4_t \u223c Dir(\u03b3); (b) if t\u2208C, draw a distribution over words \u03c6_t \u223c Dir(\u03b2); (c) if t\u2208F, draw a distribution over words \u03c8_t \u223c Dir(\u03be). 2. For each context d: (a) draw a distribution \u03b8_d \u223c Dir(\u03b1) over states t\u2208C; (b) for each word w_i in d: (i) draw t_i from \u03b4_{t_{i\u22121}} \u2022 \u03b8_d; (ii) if t_i\u2208C, draw w_i from \u03c6_{t_i}, else draw w_i from \u03c8_{t_i}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "For each context d, we draw a prior distribution \u03b8_d-formally identical to the LDA topic prior-that is defined only for the states t\u2208C. This prior is then used to weight the draws for states at each word, from \u03b4_{t_{i\u22121}} \u2022 \u03b8_d, where we have defined the vector-valued operation \u2022 as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "(\u03b4_{t_{i\u22121}} \u2022 \u03b8_d)_{t_i} = (1/Z) \u03b4_{t_i|t_{i\u22121}} \u00b7 \u03b8_{t_i|d} if t_i\u2208C, and (1/Z) \u03b4_{t_i|t_{i\u22121}} if t_i\u2208F",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "(\u03b4_{t_{i\u22121}} \u2022 \u03b8_d)_{t_i} is the element corresponding to state t_i in the vector \u03b4_{t_{i\u22121}} \u2022 \u03b8_d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "Z is a normalization constant such that the probability mass sums to one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "p(t_i|t_{\u2212i}, w) \u221d [(N_{w_i|t_i} + \u03b2)/(N_{t_i} + W\u03b2)] \u00b7 [(N_{t_i|d_i} + \u03b1)/(N_{d_i} + C\u03b1)] \u00b7 [(N_{t_i|t_{i\u22121}} + \u03b3)(N_{t_{i+1}|t_i} + I[t_{i\u22121}=t_i=t_{i+1}] + \u03b3)]/(N_{t_i} + T\u03b3 + I[t_i=t_{i\u22121}]) if t_i \u2208 C, and p(t_i|t_{\u2212i}, w) \u221d [(N_{w_i|t_i} + \u03be)/(N_{t_i} + W\u03be)] \u00b7 [(N_{t_i|t_{i\u22121}} + \u03b3)(N_{t_{i+1}|t_i} + I[t_{i\u22121}=t_i=t_{i+1}] + \u03b3)]/(N_{t_i} + T\u03b3 + I[t_i=t_{i\u22121}]) if t_i \u2208 F. Figure 2: Conditional distribution for t_i in the CDHMM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "The important thing to note is that the draw for states at each word is proportional to a composite of (a) the product of the individual elements of the topic and transition priors when t i \u2208C and (b) the transition priors when t i \u2208F . The draw is proportional to the product of topic and transition priors when t i \u2208C because we have made a product of experts (PoE) factorization assumption (Hinton, 2002) for tractability and to reduce the size of our model. Without such an assumption, the transition parameters would lie in a partitioned space of size O(|C|^4) as opposed to O(|T|^2) for the current model. Furthermore, this combination of a composite hidden state space with a product of experts assumption allows us to capture high variance for certain states.",
"cite_spans": [
{
"start": 393,
"end": 407,
"text": "(Hinton, 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "To summarize, the CDHMM is a composite model where both the observed token and the hidden state variable are composite distributions. For the hidden state, this means that there is a \"topical\" element with high variance across contexts that is embedded in the state sequence for a subset of events. We embed this element through a PoE assumption where transitions into content states are modeled as a product of the transition probability and the local probability of the content state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "Inference. We use a Gibbs sampler (Gao and Johnson, 2008) to learn the parameters of this and all other models under consideration. In this inference regime, two distributions are of particular interest. One is the posterior density and the other is the conditional distribution, neither of which can be learned in closed form.",
"cite_spans": [
{
"start": 34,
"end": 57,
"text": "(Gao and Johnson, 2008)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "Letting \u039b = (\u03b8, \u03b4, \u03c6, \u03c8) and h = (\u03b1, \u03b2, \u03b3, \u03be), the posterior density is given as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "p(\u039b|w, t; h) \u221d p(w, t|\u039b) p(\u039b; h). Note that p(w, t|\u039b) is equal to \u220f_{d=1}^{D} \u220f_{i=1}^{N_d} (\u03c6_{w_i|t_i} \u03b8_{t_i|d} \u03b4_{t_i|t_{i\u22121}})^{I[t_i\u2208C]} (\u03c8_{w_i|t_i} \u03b4_{t_i|t_{i\u22121}})^{I[t_i\u2208F]} (1) where I[\u2022]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "is the indicator function, D is the number of documents in the corpus and N d is the number of tokens in document d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "Another important measure is the conditional distribution which is conditioned on all the random variables except the hidden state variable of interest and which is derived by integrating out the priors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "p(t_i|t_{\u2212i}, w; h) \u221d p(t_i|t_{\u2212i}; h) p(w_i|t, w_{\u2212i}; h) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "where t_{\u2212i} is the joint random variable t without t_i and w_{\u2212i} is w without w_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "There are two well-known approaches to conducting Gibbs sampling for HMMs. The default method is to sample \u039b based on the posterior, then sample each t i based on the conditional distribution. Another approach is to sample directly from the conditional distribution without sampling from the posterior since the conditional distribution incorporates the posterior through integration. This is called a collapsed Gibbs sampler, which is the method employed for the models in this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "The full conditional distribution for tag transitions for the Gibbs sampler is given in Figure 2 . At each time step, we decrement all counts for the current value of t_i, sample a new value for t_i from a multinomial proportional to the conditional distribution, and assign that value to t_i. \u03b2 and \u03be are the hyperparameters for the word emission priors of the content states and function states, respectively. \u03b3 is the hyperparameter for the state transition priors. \u03b1 is the hyperparameter for the state prior within a given context d. Note that we have overloaded notation so that C and T here refer to the sizes of the corresponding state sets; W is the size of the vocabulary. Notation such as N_{t_i|t_{i\u22121}} refers to the count of the events indicated by the subscript, excluding the current token and tag under consideration: N_{t_i|t_{i\u22121}} is the number of times t_i has occurred after t_{i\u22121}, minus the tag for w_i; N_{w_i|t_i} is the number of times w_i has occurred with t_i, minus the current value; and N_{t_i} and N_{d_i} are the counts for the given tag and document, minus the current value.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 96,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "In its broad outline, the CDHMM is not much more complicated than an HMM since the decomposition (eqn. 1) is nearly identical to that of an HMM with the exception that conditional probabilities for a subset of the states-the content states-are local. An inference algorithm can be derived that involves no more than adding a single term to the standard MCMC algorithm for HMMs (see Figure 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 382,
"end": 390,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "CDHMM",
"sec_num": "3.1"
},
{
"text": "The CDHMM explicitly posits two different types of states: function states and content states. Having made this distinction, there is a very simple way to capture the difference in emission distributions for function and content states within an otherwise standard HMM: posit different hyperparameters for the two types. One type has a small hyperparameter to model a sparse distribution for function words and the other has a relatively large hyperparameter to model a distribution with broader support. This extension, which we refer to as HMM+, provides an important benchmark to compare with the CDHMM to see how much is gained by its additional ability to model the fact that function words occur frequently but have low variance across contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM+",
"sec_num": "3.2"
},
{
"text": "As with the CDHMM, we use Gibbs sampling to estimate the model parameters while holding the two different hyperparameters fixed. The conditional distribution for tag transitions for this model is identical to that in fig. 2 except that it does not have the second term",
"cite_spans": [],
"ref_spans": [
{
"start": 217,
"end": 223,
"text": "fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "HMM+",
"sec_num": "3.2"
},
{
"text": "(N_{t_i|d_i} + \u03b1)/(N_{d_i} + C\u03b1) in the first case, where t_i\u2208C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM+",
"sec_num": "3.2"
},
{
"text": "We are not aware of a published instance of such an extension to the HMM-which our results show to be surprisingly effective. Goldwater and Griffiths (2007) ",
"cite_spans": [
{
"start": 126,
"end": 156,
"text": "Goldwater and Griffiths (2007)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HMM+",
"sec_num": "3.2"
},
{
"text": "Data. We use five datasets from four languages (English, German, Portuguese, Uspanteko) for evaluating POS tagging performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experiments",
"sec_num": "4"
},
{
"text": "\u2022 English: the Brown corpus (Francis et al., 1982) and the Wall Street Journal portion of the Penn Treebank (Marcus et al., 1994 ). \u2022 German: the Tiger corpus (Brants et al., 2002) .",
"cite_spans": [
{
"start": 28,
"end": 50,
"text": "(Francis et al., 1982)",
"ref_id": "BIBREF8"
},
{
"start": 108,
"end": 128,
"text": "(Marcus et al., 1994",
"ref_id": "BIBREF20"
},
{
"start": 159,
"end": 180,
"text": "(Brants et al., 2002)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experiments",
"sec_num": "4"
},
{
"text": "\u2022 Portuguese: the full Bosque subset of the Floresta corpus (Afonso et al., 2002) . \u2022 Uspanteko (an endangered Mayan language of Guatemala): morpheme-segmented and POS-tagged texts collected and annotated by the OKMA language documentation project (Pixabaj et al., 2007) ; we use the cleaned-up version described in Palmer et al. (2009) . Table 2 provides the statistics for these corpora. We lowercase all words, do not remove any punctuation or hapax legomena, and we do not replace numerals with a single identifier. Due to the nature of the models, document boundaries are retained.",
"cite_spans": [
{
"start": 60,
"end": 81,
"text": "(Afonso et al., 2002)",
"ref_id": "BIBREF1"
},
{
"start": 248,
"end": 270,
"text": "(Pixabaj et al., 2007)",
"ref_id": "BIBREF23"
},
{
"start": 316,
"end": 336,
"text": "Palmer et al. (2009)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 339,
"end": 346,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data and Experiments",
"sec_num": "4"
},
{
"text": "We report values for three evaluation metrics on all five corpora, using their full tagsets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": null
},
{
"text": "\u2022 Accuracy: We use a greedy search algorithm to map each unsupervised tag to a gold label such that accuracy is maximized. We evaluate on a 1-to-1 mapping between unsupervised tags and gold labels, as well as many-to-1 (M-to-1), corresponding to the evaluation mappings used in Johnson (2007) . The 1-to-1 mapping provides a stricter evaluation. The many-to-one mapping, on the other hand, may be more adequate as unsupervised tags tend to be more fine-grained than \u2022 Pairwise Precision and Recall: Viewing tagging as a clustering task over tokens, we evaluate pairwise precision (P ) and recall (R) between the model tag sequence (M ) and gold tag sequence (G) by counting the true positives (tp), false positives (f p) and false negatives (f n) between the two and setting P = tp/(tp + f p) and R = tp/(tp + f n). tp is the number of token pairs that share a tag in M as well as in G, f p is the number token pairs that share the same tag in M but have different tags in G, and f n is the number token pairs assigned a different tag in M but the same in G (Meila, 2007) . We also provide the f -score which is the harmonic mean of P and R.",
"cite_spans": [
{
"start": 278,
"end": 292,
"text": "Johnson (2007)",
"ref_id": "BIBREF18"
},
{
"start": 1058,
"end": 1071,
"text": "(Meila, 2007)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": null
},
{
"text": "The variation of information is an information theoretic metric that measures the amount of information lost and gained in going from tag sequence M to G (Meila, 2007) . It is defined as",
"cite_spans": [
{
"start": 154,
"end": 167,
"text": "(Meila, 2007)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Variation of Information (VI):",
"sec_num": null
},
{
"text": "VI(M, G) = H(M) + H(G) \u2212 2I(M, G)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Variation of Information (VI):",
"sec_num": null
},
{
"text": "where H denotes entropy and I mutual information. Goldwater and Griffiths (2007) noted that this measure can point out models that have more consistent errors in the form of lower VI, even when accuracy figures are the same.",
"cite_spans": [
{
"start": 50,
"end": 80,
"text": "Goldwater and Griffiths (2007)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Variation of Information (VI):",
"sec_num": null
},
{
"text": "We also report learning curves on M-to-1 with geometrically increasing training set sizes of 8, 16, 32, 64, 128, 256, 512, 1024, and all documents, or as many as possible given the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Variation of Information (VI):",
"sec_num": null
},
{
"text": "In this section we discuss our parameter settings and experimental results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We compare four different models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Parameters",
"sec_num": "5.1"
},
{
"text": "\u2022 HMM: a standard HMM \u2022 HMM+: an HMM in which the hyperparameters for the word emissions are asymmetric, such that content states have different word emission priors compared to function states. \u2022 LDAHMM: an HMM with a distinguished state that generates words from a topic model (Griffiths et al., 2005) [Figure 3: Averaged many-to-one accuracy on the full tagset for the models HMM+, LDAHMM, and CDHMM when the number of states is set at 20, 30, 40, and 50.]",
"cite_spans": [
{
"start": 279,
"end": 303,
"text": "(Griffiths et al., 2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Parameters",
"sec_num": "5.1"
},
{
"text": "\u2022 CDHMM: our HMM with context-based emissions, where the context used is the document",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Parameters",
"sec_num": "5.1"
},
{
"text": "We implemented all of these models, ensuring performance differences are due to the models themselves rather than implementation details. For all models, the transition hyperparameters \u03b3 are set to 0.1. For the LDAHMM and HMM all emission hyperparameters are set to 0.0001. These figures are the MCMC settings that provided the best results in Johnson (2007) . For the models that distinguish content and function states (HMM+, CDHMM), we fixed the number of content states at 5 and set the function state emission hyperparameters \u03be = 0.0001 and the content state emission hyperparameters \u03b2 = 0.1. For the models with an LDA or LDA-like component (LDAHMM, CDHMM), we set the topic or content-state hyperparameter \u03b1 = 1.",
"cite_spans": [
{
"start": 344,
"end": 358,
"text": "Johnson (2007)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Parameters",
"sec_num": "5.1"
},
{
"text": "For decoding, we use maximum posterior decoding to obtain a single sample after the required burnin, as has been done in other unsupervised HMM experiments. We use this sample for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Parameters",
"sec_num": "5.1"
},
{
"text": "Results for all models on the full tagset are provided in table 1. 2 Each number is the mean accuracy of ten randomly initialized samples after a single chain burn-in of 1000 iterations. The model with a statistically significant (p < 0.05) best score for each measure and data set is given in plain bold. In cases where the differences for the best models are not significantly different from each other, but are significantly better from the others, the top model scores are given in bold italic. CDHMM is extremely strong on the accuracy metric: it wins or ties for all datasets for both 1-to-1 and M-to-1 measures. For pairwise f -score, it obtains the best score for two datasets (WSJ and Tiger), and ties with HMM+ on Brown (we return to Uspanteko and Floresta below in an experiment that varies the number of states). For VI, HMM+ and CDHMM both easily outperform the other models, with CDHMM winning Brown and Uspanteko and HMM+ winning Floresta.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "In the case of Uspanteko, the absolute difference in mean performance between models is smaller overall but still significant. This is due to the reduced variance between samples for all models. This is striking because the non-CDHMM models have much higher standard deviation on other corpora but have sharply reduced standard deviation only for Uspanteko. The most likely explanation is that the Uspanteko corpus is much smaller than the other corpora. 3 Nonetheless, CDHMM comes out strongest on most measures.",
"cite_spans": [
{
"start": 455,
"end": 456,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "A simple baseline for accuracy is to choose the most frequent tag for all tokens; this gives accuracies of 0.14 (WSJ), 0.14 (Brown), 0.21 (Tiger), 0.20 (Floresta), and 0.11 (Uspanteko). Clearly, all of the models easily outperform this baseline. Figure 3 shows the change in accuracy for the different models for different corpora when the overall number of states is varied between 20 and 50. The figure shows results for M-to-1. All models with the exception of HMM+ show improvements as the number of states is increased. This brings up the valid concern (Clark, 2003; Johnson, 2007) that a model could posit a very large number of states and obtain high M-to-1 scores. However, it is neither the case here nor in any of the studies we cite. Furthermore, as is strongly suggested with HMM+, it does not seem as if all models will benefit from assuming a large number of states. Looking at the results by number of states on VI and f -score for CDHMM( Figure 5 ), it is clear that Floresta displays the reverse pattern of all other data sets where performance monotonically deteriorates as state sizes are increased. Though the exact reason is unknown, we believe it is partially due to the fact that Floresta has 19 tags. We therefore wondered whether positing a state size that more closely approximated the size of the gold tag set performs better. Since the discrepancy is greatest for Uspanteko and Floresta, we present tabulated results for experiments with state settings of 100 and 20 states respectively (table 3) . With the exception of VI (where lower is better) for Uspanteko, the scores generally improve when the model state size is closer to the gold size. M-to-1 goes down for Floresta when 20 states are posited, but this is to be expected since this score is defined, to a certain extent, to do better with Variance. As we average performance figures over ten runs for each model, it is also instructive to consider standard deviation across runs. 
Standard deviation is lowest for the CDHMM models and the vanilla HMM. Standard deviation is high for HMM+ and LDAHMM. This is not surprising for LDAHMM, since it has fifty topic parameters in addition to the number of states posited, and random initial conditions would have greater effect on the outcome than for the other models. It is unexpected, however, that HMM+ has high variance over different chains. The model shares the large content emission hyperparameter \u03b2 = 0.1 with CDHMM. At this point, it can only be assumed that the additional LDA component acts as a regularization factor for CDHMM and reduced the volatility in having a large emission hyperparameter. Learning curves We present learning curves on different sizes of subcorpora in Figure 4 . The graphs are box plots of the full M-1 accuracy figures on 10 randomly initialized training runs for seven subcorpora in Brown, nine in WSJ, Tiger, Floresta and three in Uspanteko.",
"cite_spans": [
{
"start": 558,
"end": 571,
"text": "(Clark, 2003;",
"ref_id": "BIBREF6"
},
{
"start": 572,
"end": 586,
"text": "Johnson, 2007)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 246,
"end": 254,
"text": "Figure 3",
"ref_id": null
},
{
"start": 954,
"end": 962,
"text": "Figure 5",
"ref_id": "FIGREF1"
},
{
"start": 1515,
"end": 1524,
"text": "(table 3)",
"ref_id": "TABREF5"
},
{
"start": 2721,
"end": 2729,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Comparing the graphs, the performance of HMM+ shows the strongest improvement for English and German data as the amount of training data increases. Also, it is evident that CDHMM posts consistent performance gains across data sets as it trains on more data. This stands in opposition to HMM and LDAHMM which do not seem able to take advantage of more information for WSJ and Floresta. This suggests that performance for CDHMM and HMM+ could improve if the training corpora were augmented with out-of-corpus raw data. One exception to the consistent improvement over increased data is the performance of the models on Uspanteko, which uniformly flatline. One reason might be that the tags are labeled over segmented morphemes instead of words like the other corpora. Another could be that Uspanteko has a relatively large number of tags in a very small corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of states.",
"sec_num": null
},
{
"text": "Unsupervised POS tagging is an active area of research. Most recent work has involved HMMs. Given that an unconstrained HMM is not well understood in POS tagging, much work has been done on examining the mechanism and the properties of the HMM as applied to natural language data (Johnson, 2007; Gao and Johnson, 2008; Headden III et al., 2008) . Conversely, there has also been work focused on improving the HMM as an inference procedure that looked at POS tagging as an example (Graca et al., 2009; Liang and Klein, 2009) . Nonparametric HMMs for unsupervised POS tag induction (Snyder et al., 2008; Van Gael et al., 2009) have seen particular activity due to the fact that model size assumptions are unnecessary and it lets the data \"speak for itself.\"",
"cite_spans": [
{
"start": 280,
"end": 295,
"text": "(Johnson, 2007;",
"ref_id": "BIBREF18"
},
{
"start": 296,
"end": 318,
"text": "Gao and Johnson, 2008;",
"ref_id": "BIBREF10"
},
{
"start": 319,
"end": 344,
"text": "Headden III et al., 2008)",
"ref_id": "BIBREF16"
},
{
"start": 480,
"end": 500,
"text": "(Graca et al., 2009;",
"ref_id": "BIBREF13"
},
{
"start": 501,
"end": 523,
"text": "Liang and Klein, 2009)",
"ref_id": "BIBREF19"
},
{
"start": 580,
"end": 601,
"text": "(Snyder et al., 2008;",
"ref_id": "BIBREF29"
},
{
"start": 602,
"end": 624,
"text": "Van Gael et al., 2009)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "There is also work on alternative unsupervised models that are not HMMs (Sch\u00fctze, 1993; Abend et al., 2010; Reichart et al., 2010b) as well as research on improving evaluation of unsupervised taggers (Frank et al., 2009; Reichart et al., 2010a) .",
"cite_spans": [
{
"start": 72,
"end": 87,
"text": "(Sch\u00fctze, 1993;",
"ref_id": "BIBREF27"
},
{
"start": 88,
"end": 107,
"text": "Abend et al., 2010;",
"ref_id": "BIBREF0"
},
{
"start": 108,
"end": 131,
"text": "Reichart et al., 2010b)",
"ref_id": "BIBREF26"
},
{
"start": 200,
"end": 220,
"text": "(Frank et al., 2009;",
"ref_id": "BIBREF9"
},
{
"start": 221,
"end": 244,
"text": "Reichart et al., 2010a)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "Though they did not concentrate on unsupervised methods, Haghighi and Klein (2006) conducted an unsupervised experiment that utilized certain token features (e.g. character suffixes of 3 or less, has initial capital, etc.; the features themselves are from Smith and Eisner (2005) ) to learn parameters in an undirected graphical model which was the equivalent of an HMM in directed models. It was also the first study to posit the one-to-one evaluation criterion which has been repeated extensively since (Johnson, 2007; Headden III et al., 2008; Graca et al., 2009) . Finkel et al. (2007) is an interesting variant of unsupervised POS tagging where a parse tree is assumed and POS tags are induced from this structure non-parametrically. It is the converse of unsupervised parsing which assumes access to a tagged corpus and induces a parsing model.",
"cite_spans": [
{
"start": 57,
"end": 82,
"text": "Haghighi and Klein (2006)",
"ref_id": "BIBREF15"
},
{
"start": 256,
"end": 279,
"text": "Smith and Eisner (2005)",
"ref_id": "BIBREF28"
},
{
"start": 505,
"end": 520,
"text": "(Johnson, 2007;",
"ref_id": "BIBREF18"
},
{
"start": 521,
"end": 546,
"text": "Headden III et al., 2008;",
"ref_id": "BIBREF16"
},
{
"start": 547,
"end": 566,
"text": "Graca et al., 2009)",
"ref_id": "BIBREF13"
},
{
"start": 569,
"end": 589,
"text": "Finkel et al. (2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "Other models more directly influenced or closely parallel our work. Griffiths et al. (2005) is the work that inspired the current approach where a set of states is designated to capture variance across contexts. The primary goal of that model was to induce a topic model given data that had not been filtered of noise in the form of function words. As such, distinguishing between topic states such that they model different syntactic states was not attempted, and we have seen in sec. 3 that such an extension is not entirely straightforward. 4 Boyd-Graber and Blei (2009) has some parallels to our model in that a hidden variable over topics is distributed according to a normalized product between a context prior and a syntactic prior. However, it assumes a much greater amount of information than we do in that a parse tree as well as (possibly) POS tags are taken as observed. The model has a very different goal from ours as well, which is to infer a syntactically informed topic model. Teichert and Daum\u00e9 III (2010) is another study with close similarities to our own. This study models distinctions between closed class words and open class words within a modified HMM. It is unclear from their formulation how the distinction between open class and closed class words is learned.",
"cite_spans": [
{
"start": 68,
"end": 91,
"text": "Griffiths et al. (2005)",
"ref_id": "BIBREF14"
},
{
"start": 562,
"end": 573,
"text": "Blei (2009)",
"ref_id": "BIBREF3"
},
{
"start": 994,
"end": 1023,
"text": "Teichert and Daum\u00e9 III (2010)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "There is also extensive literature on learning sequence structure from unlabeled text (Smith and Eisner, 2005; Goldberg et al., 2008; Ravi and Knight, 2009) which assume access to a tag dictionary. Goldwater and Griffiths (2007) deserves mention for examining a semi-supervised model 4 We tested a variant of LDAHMM in which more than one state can generate topics. It did not achieve good results. that sampled emission hyperparameters for each state rather than a single symmetric hyperparameter. They showed that this outperformed a symmetric model. An interesting heuristic model is Zhao and Marcus (2009) that uses a seed set of closed class words to classify open class words.",
"cite_spans": [
{
"start": 86,
"end": 110,
"text": "(Smith and Eisner, 2005;",
"ref_id": "BIBREF28"
},
{
"start": 111,
"end": 133,
"text": "Goldberg et al., 2008;",
"ref_id": "BIBREF11"
},
{
"start": 134,
"end": 156,
"text": "Ravi and Knight, 2009)",
"ref_id": "BIBREF24"
},
{
"start": 198,
"end": 228,
"text": "Goldwater and Griffiths (2007)",
"ref_id": "BIBREF12"
},
{
"start": 284,
"end": 285,
"text": "4",
"ref_id": null
},
{
"start": 587,
"end": 609,
"text": "Zhao and Marcus (2009)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "We have shown that a hidden Markov model that allocates a subset of the states to have distributions conditioned on localized domains can significantly improve performance in unsupervised partof-speech tagging. We have also demonstrated that significant performance gains are possible simply by setting a different emission hyperparameter for a subgroup of the states. It is encouraging that these results hold for both models not just on the WSJ but across a diverse set of languages and measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We believe our proposed extensions to the HMM are a significant contribution to the general HMM and unsupervised POS tagging literature in that both can be implemented with minimum modification of existing MCMC inferred HMMs, have (nearly) equivalent run times, produce output that is easy to interpret since they are based on a generative framework, and bring about considerable performance improvements at the same time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We call our model a \"crouching Dirichlet\" model since it involves a Dirichlet prior that generates distributions for certain states as if it were \"crouching\" on the side.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Similar results are obtained with reduced tagsets, as is commonly done in other work on unsupervised POS-tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "which is interesting in itself since the weak law of large numbers implies that sample standard deviation decreases with sample size, which in our case is the number of tokens rather than the 10 samples under discussion",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank Elias Ponvert and the anonymous reviewers. This work was supported by a grant from the Morris Memorial Trust Fund of the New York Community Trust.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Improved unsupervised POS induction through prototype discovery",
"authors": [
{
"first": "O",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1298--1307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O. Abend, R. Reichart, and A. Rappoport. 2010. Im- proved unsupervised POS induction through prototype discovery. In Proceedings of ACL, pages 1298-1307.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Floresta sint\u00e1(c)tica\": a treebank for Portuguese",
"authors": [
{
"first": "S",
"middle": [],
"last": "Afonso",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Bick",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Haber",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Santos",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "1698--1703",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Afonso, E. Bick, R. Haber, and D. Santos. 2002. Flo- resta sint\u00e1(c)tica\": a treebank for Portuguese. In Pro- ceedings of LREC, pages 1698-1703.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "D",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "A",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "The Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. M. Blei, A. Y. Ng, and M. I. Jordan. 2003. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993-1022.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Syntactic topic models",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "185--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. L. Boyd-Graber and D. Blei. 2009. Syntactic topic models. In Proceedings of NIPS, pages 185-192.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The TIGER treebank",
"authors": [
{
"first": "S",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Dipper",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Lezius",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Workshop on Treebanks and Linguistic Theories",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Brants, S. Dipper, S. Hansen, W. Lezius, and G. Smith. 2002. The TIGER treebank. In Proceedings of the Workshop on Treebanks and Linguistic Theories.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "TnT: a statistical part-of-speech tagger",
"authors": [
{
"first": "T",
"middle": [],
"last": "Brants",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of conference on Applied natural language processing",
"volume": "",
"issue": "",
"pages": "224--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Brants. 2000. TnT: a statistical part-of-speech tag- ger. In Proceedings of conference on Applied natural language processing, pages 224-231.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Combining distributional and morphological information for part of speech induction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "59--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Clark. 2003. Combining distributional and morpho- logical information for part of speech induction. In Proceedings of EACL, pages 59-66.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The infinite tree",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "272--279",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Finkel, T. Grenager, and C. D. Manning. 2007. The infinite tree. In Proceedings of ACL, pages 272-279.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Frequency analysis of English usage: Lexicon and grammar",
"authors": [
{
"first": "W",
"middle": [
"N"
],
"last": "Francis",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ku\u010dera",
"suffix": ""
},
{
"first": "A",
"middle": [
"W"
],
"last": "Mackie",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W.N. Francis, H. Ku\u010dera, and A.W. Mackie. 1982. Fre- quency analysis of English usage: Lexicon and gram- mar. Houghton Mifflin Harcourt.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Evaluating models of syntactic category acquisition without using a gold standard",
"authors": [
{
"first": "S",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of CogSci",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Frank, S. Goldwater, and F. Keller. 2009. Evaluating models of syntactic category acquisition without using a gold standard. In Proceedings of CogSci.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A comparison of Bayesian estimators for unsupervised Hidden Markov Model POS taggers",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "344--352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Gao and M. Johnson. 2008. A comparison of Bayesian estimators for unsupervised Hidden Markov Model POS taggers. In Proceedings of EMNLP, pages 344- 352.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "EM can find pretty good HMM POS-taggers (when given a good start)",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Adler",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "746--754",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Goldberg, M. Adler, and M. Elhadad. 2008. EM can find pretty good HMM POS-taggers (when given a good start). In Proceedings of ACL, pages 746-754.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A fully Bayesian approach to unsupervised part-of-speech tagging",
"authors": [
{
"first": "S",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "T",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "744--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Goldwater and T. L. Griffiths. 2007. A fully Bayesian approach to unsupervised part-of-speech tagging. In Proceedings of ACL, pages 744-751.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Posterior vs parameter sparsity in latent variable models",
"authors": [
{
"first": "J",
"middle": [],
"last": "Graca",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "664--672",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Graca, K. Ganchev, B. Taskar, and F. Pereira. 2009. Posterior vs parameter sparsity in latent variable mod- els. In Proceedings of NIPS, pages 664-672.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Integrating topics and syntax",
"authors": [
{
"first": "T",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "537--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. L. Griffiths, M. Steyvers, D. M. Blei, and J. M. Tenen- baum. 2005. Integrating topics and syntax. In Pro- ceedings of NIPS, pages 537-544.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Prototype-driven learning for sequence models",
"authors": [
{
"first": "A",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of HLT/NAACL",
"volume": "",
"issue": "",
"pages": "320--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Haghighi and D. Klein. 2006. Prototype-driven learning for sequence models. In Proceedings of HLT/NAACL, pages 320-327.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Evaluating unsupervised part-of-speech tagging for grammar induction",
"authors": [
{
"first": "W",
"middle": [
"P"
],
"last": "Headden",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "329--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. P. Headden III, D. McClosky, and E. Charniak. 2008. Evaluating unsupervised part-of-speech tagging for grammar induction. In Proceedings of COLING, pages 329-336.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Training products of experts by minimizing contrastive divergence",
"authors": [
{
"first": "G",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2002,
"venue": "Neural Computation",
"volume": "14",
"issue": "8",
"pages": "1771--1800",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G.E. Hinton. 2002. Training products of experts by min- imizing contrastive divergence. Neural Computation, 14(8):1771-1800.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Why doesn't EM find good HMM POS-taggers",
"authors": [
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "296--305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Johnson. 2007. Why doesn't EM find good HMM POS-taggers. In Proceedings of EMNLP-CoNLL, pages 296-305.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Online EM for unsupervised models",
"authors": [
{
"first": "P",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of HLT/NAACL",
"volume": "",
"issue": "",
"pages": "611--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Liang and D. Klein. 2009. Online EM for unsuper- vised models. In Proceedings of HLT/NAACL, pages 611-619.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "M",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1994,
"venue": "Comp. ling",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M.P. Marcus, B. Santorini, and M.A. Marcinkiewicz. 1994. Building a large annotated corpus of English: The Penn Treebank. Comp. ling., 19(2):313-330.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Comparing clusterings-an information based distance",
"authors": [
{
"first": "M",
"middle": [],
"last": "Meila",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Multivariate Analysis",
"volume": "98",
"issue": "5",
"pages": "873--895",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Meila. 2007. Comparing clusterings-an informa- tion based distance. Journal of Multivariate Analysis, 98(5):873-895.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Evaluating automation strategies in language documentation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the NAACL-HLT 2009 Workshop on Active Learning for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "36--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Palmer, T. Moon, and J. Baldridge. 2009. Evaluat- ing automation strategies in language documentation. In Proceedings of the NAACL-HLT 2009 Workshop on Active Learning for Natural Language Processing, pages 36-44.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Text Collections in Four Mayan Languages",
"authors": [
{
"first": "T",
"middle": [
"C"
],
"last": "Pixabaj",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Vicente M\u00e9ndez",
"suffix": ""
},
{
"first": "M",
"middle": [
"Vicente"
],
"last": "M\u00e9ndez",
"suffix": ""
},
{
"first": "O",
"middle": [
"A"
],
"last": "Dami\u00e1n",
"suffix": ""
}
],
"year": 2007,
"venue": "The Archive of the Indigenous Languages of Latin America",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. C. Pixabaj, M. A. Vicente M\u00e9ndez, M. Vicente M\u00e9ndez, and O. A. Dami\u00e1n. 2007. Text Collections in Four Mayan Languages. Archived in The Archive of the Indigenous Languages of Latin America.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Minimized models for unsupervised part-of-speech tagging",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ravi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL and AFNLP",
"volume": "",
"issue": "",
"pages": "504--512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Ravi and K. Knight. 2009. Minimized models for unsupervised part-of-speech tagging. In Proceedings of ACL and AFNLP, pages 504-512.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Type level clustering evaluation: New measures and a POS induction case study",
"authors": [
{
"first": "R",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "77--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Reichart, O. Abend, and A. Rappoport. 2010a. Type level clustering evaluation: New measures and a POS induction case study. In Proceedings of CoNLL, pages 77-87.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Improved unsupervised POS induction using intrinsic clustering quality and a Zipfian constraint",
"authors": [
{
"first": "R",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Fattal",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "57--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Reichart, R. Fattal, and A. Rappoport. 2010b. Im- proved unsupervised POS induction using intrinsic clustering quality and a Zipfian constraint. In Proceed- ings of CoNLL, pages 57-66.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Part-of-speech induction from scratch",
"authors": [
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "251--258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Sch\u00fctze. 1993. Part-of-speech induction from scratch. In Proceedings of ACL, pages 251-258.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Contrastive estimation: Training log-linear models on unlabeled data",
"authors": [
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "354--362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N.A. Smith and J. Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of ACL, pages 354-362.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Unsupervised multilingual learning for POS tagging",
"authors": [
{
"first": "B",
"middle": [],
"last": "Snyder",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1041--1050",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Snyder, T. Naseem, J. Eisenstein, and R. Barzilay. 2008. Unsupervised multilingual learning for POS tagging. In Proceedings of EMNLP, pages 1041-1050.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Unsupervised Part of Speech Tagging Without a Lexicon",
"authors": [
{
"first": "A",
"middle": [
"R"
],
"last": "Teichert",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2010,
"venue": "NIPS Workshop on Grammar Induction, Representation of Language and Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.R. Teichert and H. Daum\u00e9 III. 2010. Unsupervised Part of Speech Tagging Without a Lexicon. In NIPS Workshop on Grammar Induction, Representation of Language and Language Learning 2010.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Feature-rich part-of-speech tagging with a cyclic dependency network",
"authors": [
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Toutanova, D. Klein, C. Manning, and Y. Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of NAACL, pages 173-180.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The infinite HMM for unsupervised PoS tagging",
"authors": [
{
"first": "J",
"middle": [],
"last": "Van Gael",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "678--687",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Van Gael, A. Vlachos, and Z. Ghahramani. 2009. The infinite HMM for unsupervised PoS tagging. In Proceedings of EMNLP, pages 678-687.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Discriminative unsupervised learning of structured predictors",
"authors": [
{
"first": "L",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wilkinson",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Southey",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Schuurmans",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "1057--1064",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Xu, D. Wilkinson, F. Southey, and D. Schuurmans. 2006. Discriminative unsupervised learning of structured predictors. In Proceedings of ICML, pages 1057-1064.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A simple unsupervised learner for POS disambiguation rules given only a minimal lexicon",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "688--697",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Q. Zhao and M. Marcus. 2009. A simple unsupervised learner for POS disambiguation rules given only a minimal lexicon. In Proceedings of EMNLP, pages 688-697.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "f-score and VI for CDHMM by number of states.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Learning curves on M-to-1 evaluation. The staples at each point represent two standard deviations.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"num": null,
"content": "<table><tr><td>corpus</td><td>tokens</td><td>docs</td><td>avg.</td><td>tags</td></tr><tr><td>WSJ</td><td>974254</td><td>1801</td><td>541</td><td>43</td></tr><tr><td>Brown</td><td>797328</td><td>343</td><td>2325</td><td>80</td></tr><tr><td>Tiger</td><td>447079</td><td>1090</td><td>410</td><td>58</td></tr><tr><td>Floresta</td><td>197422</td><td>1956</td><td>101</td><td>19</td></tr><tr><td>Uspanteko</td><td>70125</td><td>29</td><td>2418</td><td>83</td></tr></table>",
"text": "posits different hyperparameters for individual states, but not for different groups of states.",
"type_str": "table",
"html": null
},
"TABREF1": {
"num": null,
"content": "<table/>",
"text": "Number of tokens, documents, average tokens per document and total tag types for each corpus.",
"type_str": "table",
"html": null
},
"TABREF3": {
"num": null,
"content": "<table/>",
"text": "",
"type_str": "table",
"html": null
},
"TABREF5": {
"num": null,
"content": "<table/>",
"text": "Evaluation for Uspanteko and Floresta. Experiments in this table use state sizes that correspond more closely to the size of the tag sets in the respective corpora.",
"type_str": "table",
"html": null
}
}
}
}