| { |
| "paper_id": "Q16-1018", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:06:21.834444Z" |
| }, |
| "title": "Unsupervised Part-Of-Speech Tagging with Anchor Hidden Markov Models", |
| "authors": [ |
| { |
| "first": "Karl", |
| "middle": [], |
| "last": "Stratos", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Columbia University", |
| "location": {} |
| }, |
| "email": "stratos@cs.columbia.edu" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Columbia University", |
| "location": {} |
| }, |
| "email": "mcollins@cs.columbia.edu" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Hsu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Columbia University", |
| "location": {} |
| }, |
| "email": "djhsu@cs.columbia.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We tackle unsupervised part-of-speech (POS) tagging by learning hidden Markov models (HMMs) that are particularly well-suited for the problem. These HMMs, which we call anchor HMMs, assume that each tag is associated with at least one word that can have no other tag, which is a relatively benign condition for POS tagging (e.g., \"the\" is a word that appears only under the determiner tag). We exploit this assumption and extend the non-negative matrix factorization framework of Arora et al. (2013) to design a consistent estimator for anchor HMMs. In experiments, our algorithm is competitive with strong baselines such as the clustering method of Brown et al. (1992) and the log-linear model of Berg-Kirkpatrick et al. (2010). Furthermore, it produces an interpretable model in which hidden states are automatically lexicalized by words.", |
| "pdf_parse": { |
| "paper_id": "Q16-1018", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We tackle unsupervised part-of-speech (POS) tagging by learning hidden Markov models (HMMs) that are particularly well-suited for the problem. These HMMs, which we call anchor HMMs, assume that each tag is associated with at least one word that can have no other tag, which is a relatively benign condition for POS tagging (e.g., \"the\" is a word that appears only under the determiner tag). We exploit this assumption and extend the non-negative matrix factorization framework of Arora et al. (2013) to design a consistent estimator for anchor HMMs. In experiments, our algorithm is competitive with strong baselines such as the clustering method of Brown et al. (1992) and the log-linear model of Berg-Kirkpatrick et al. (2010). Furthermore, it produces an interpretable model in which hidden states are automatically lexicalized by words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Part-of-speech (POS) tagging without supervision is a quintessential problem in unsupervised learning for natural language processing (NLP). A major application of this task is reducing annotation cost: for instance, it can be used to produce rough syntactic annotations for a new language that has no labeled data, which can be subsequently refined by human annotators.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Hidden Markov models (HMMs) are a natural choice of model and have been a workhorse for this problem. Early works estimated vanilla HMMs * Currently on leave at Google Inc. New York.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "with standard unsupervised learning methods such as the expectation-maximization (EM) algorithm, but it quickly became clear that they performed very poorly in inducing POS tags (Merialdo, 1994) . Later works improved upon vanilla HMMs by incorporating specific structures that are well-suited for the task, such as a sparse prior (Johnson, 2007) or a hard-clustering assumption (Brown et al., 1992) .", |
| "cite_spans": [ |
| { |
| "start": 178, |
| "end": 194, |
| "text": "(Merialdo, 1994)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 331, |
| "end": 346, |
| "text": "(Johnson, 2007)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 379, |
| "end": 399, |
| "text": "(Brown et al., 1992)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this work, we tackle unsupervised POS tagging with HMMs whose structure is deliberately suitable for POS tagging. These HMMs impose an assumption that each hidden state is associated with an observation state (\"anchor word\") that can appear under no other state. For this reason, we denote this class of restricted HMMs by anchor HMMs. Such an assumption is relatively benign for POS tagging; it is reasonable to assume that each POS tag has at least one word that occurs only under that tag. For example, in English, \"the\" is an anchor word for the determiner tag; \"laughed\" is an anchor word for the verb tag.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We build on the non-negative matrix factorization (NMF) framework of Arora et al. (2013) to derive a consistent estimator for anchor HMMs. We make several new contributions in the process. First, to our knowledge, there is no previous work directly building on this framework to address unsupervised sequence labeling. Second, we generalize the NMF-based learning algorithm to obtain extensions that are important for empirical performance (Table 1) . Third, we perform extensive experiments on unsupervised POS tagging and report competitive results against strong baselines such as the clustering method of Brown et al. (1992) and the log-linear model of Berg-Kirkpatrick et al. (2010) .", |
| "cite_spans": [ |
| { |
| "start": 69, |
| "end": 88, |
| "text": "Arora et al. (2013)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 609, |
| "end": 628, |
| "text": "Brown et al. (1992)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 657, |
| "end": 687, |
| "text": "Berg-Kirkpatrick et al. (2010)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 440, |
| "end": 449, |
| "text": "(Table 1)", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "One characteristic of the approach is the immediate interpretability of inferred hidden states. Because each hidden state is associated with an observation, we can examine the set of such anchor observations to qualitatively evaluate the learned model. In our experiments on POS tagging, we find that anchor observations correspond to possible POS tags across different languages (Table 3) . This property can be useful when we wish to develop a tagger for a new language that has no labeled data; we can label only the anchor words to achieve a complete labeling of the data.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 380, |
| "end": 389, |
| "text": "(Table 3)", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper is structured as follows. In Section 2, we establish the notation we use throughout. In Section 3, we define the model family of anchor HMMs. In Section 4, we derive a matrix decomposition algorithm for estimating the parameters of an anchor HMM. In Section 5, we present our experiments on unsupervised POS tagging. In Section 6, we discuss related works.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We use [n] to denote the set of integers {1, . . . , n}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We use E[X] to denote the expected value of a random variable X. We define \u2206 m\u22121 :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "= {v \u2208 R m : v i \u2265 0 \u2200i, i v i = 1}, i.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "e., the (m\u22121)-dimensional probability simplex. Given a vector v \u2208 R m , we use diag(v) \u2208 R m\u00d7m to denote the diagonal matrix with v 1 . . . v m on the main diagonal. Given a matrix M \u2208 R n\u00d7m , we write M i \u2208 R m to denote the i-th row of M (as a column vector).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Definition 3.1. An anchor HMM (A-HMM) is a 6tuple (n, m, \u03c0, t, o, A) for positive integers n, m and functions \u03c0, t, o, A where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Anchor Hidden Markov Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 [n]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Anchor Hidden Markov Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "is a set of observation states.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Anchor Hidden Markov Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 [m] is a set of hidden states.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Anchor Hidden Markov Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 \u03c0(h) is the probability of generating h \u2208 [m] in the first position of a sequence.", |
| "cite_spans": [ |
| { |
| "start": 44, |
| "end": 47, |
| "text": "[m]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Anchor Hidden Markov Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 t(h |h) is the probability of generating h \u2208", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Anchor Hidden Markov Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "[m] given h \u2208 [m].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Anchor Hidden Markov Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 o(x|h) is the probability of generating", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Anchor Hidden Markov Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "x \u2208 [n] given h \u2208 [m]. \u2022 A(h) := {x \u2208 [n] : o(x|h) > 0 \u2227 o(x|h ) = 0 \u2200h = h} is non-empty for each h \u2208 [m].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Anchor Hidden Markov Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In other words, an A-HMM is an HMM in which each hidden state h is associated with at least one \"anchor\" observation state that can be generated by, and only by, h. Note that the anchor condition implies n \u2265 m.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Anchor Hidden Markov Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "An equivalent definition of an A-HMM is given by organizing the parameters in matrix form. Under this definition, an A-HMM has parameters (\u03c0, T, O) where \u03c0 \u2208 R m is a vector and T \u2208 R m\u00d7m , O \u2208 R n\u00d7m are matrices whose entries are set to:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Anchor Hidden Markov Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 \u03c0 h = \u03c0(h) for h \u2208 [m] \u2022 T h ,h = t(h |h) for h, h \u2208 [m] \u2022 O x,h = o(x|h) for h \u2208 [m], x \u2208 [n]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Anchor Hidden Markov Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The anchor condition implies that rank(O) = m. To see this, consider the rows O a 1 . . . O am where a h \u2208 A(h). Since each O a h has a single non-zero entry at the h-th index, the columns of O are linearly independent. We assume rank(T ) = m.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Anchor Hidden Markov Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "An important special case of A-HMM introduced by Brown et al. (1992) is defined below. Under this A-HMM, every observation state is an anchor of some hidden state. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Anchor Hidden Markov Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We now derive an algorithm for learning A-HMMs. The algorithm reduces the learning problem to an instance of NMF from which the model parameters can be computed in closed-form.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter Estimation for A-HMMs", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We give a brief review of the NMF method of Arora et al. (2013) . (Exact) NMF is the following problem: given an n \u00d7 d matrix A = BC where B \u2208 R n\u00d7m and C \u2208 R m\u00d7d have non-negativity constraints, recover B and C. This problem is NP-hard in general (Vavasis, 2009) , but Arora et al. (2013) provide an exact and efficient method when A has the following special structure: Condition 4.1. A matrix A \u2208 R n\u00d7d satisfies this condition if A = BC for B \u2208 R n\u00d7m and C \u2208 R m\u00d7d where Anchor-NMF Input: A \u2208 R n\u00d7d satisfying Condition 4.1 with A = BC for some B \u2208 R n\u00d7m and C \u2208 R m\u00d7d , value m 3. rank(C) = m.", |
| "cite_spans": [ |
| { |
| "start": 44, |
| "end": 63, |
| "text": "Arora et al. (2013)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 248, |
| "end": 263, |
| "text": "(Vavasis, 2009)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 270, |
| "end": 289, |
| "text": "Arora et al. (2013)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NMF", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 For h = 1 . . . m, find a vertex a h as U \u2190 Gram-Schmidt({A a l } h\u22121 l=1 ) a h \u2190 arg max x\u2208[n] A x \u2212 U U A x 2 where Gram-Schmidt({A a l } h\u22121 l=1 ) is the Gram- Schmidt process that orthonormalizes {A a l } h\u22121 l=1 . \u2022 For x = 1 . . . n, recover the x-th row of B as B x \u2190 arg min u\u2208\u2206 m\u22121 A x \u2212 m h=1 u h A a h 2 (1) \u2022 Set C = [A a1 . . . A am ] . Output: B and C such that B h = B \u03c1(h) and C h = C \u03c1(h) for some permutation \u03c1 : [m] \u2192 [m]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NMF", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "x \u2208 [n], B x \u2208 \u2206 m\u22121 . I.e.,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "For each", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Since rank(B) = rank(C) = m (by property 2 and 3), the matrix A must have rank m. Note that by property 1, each row of A is given by a convex combination of the rows of C:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "For each", |
| "sec_num": "1." |
| }, |
| { |
| "text": "for x \u2208 [n], A x = m h=1 B x,h \u00d7 C h", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "For each", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Furthermore, by property 2 each h \u2208 [m] has an associated row a h \u2208 [n] such that A a h = C a h . These properties can be exploited to recover B and C. A concrete algorithm for factorizing a matrix satisfying Condition 4.1 is given in Figure 1 (Arora et al., 2013) . It first identifies a 1 . . . a m (up to some permutation) by greedily locating the row of A furthest away from the subspace spanned by the vertices selected so far. Then it recovers each B x as the convex coefficients required to combine A a 1 . . . A am to yield A x . The latter computation (1) can be achieved with any constrained optimization method; we use the Frank-Wolfe algorithm (Frank and Wolfe, 1956) . See Arora et al. (2013) for a proof of the correctness of this algorithm.", |
| "cite_spans": [ |
| { |
| "start": 244, |
| "end": 264, |
| "text": "(Arora et al., 2013)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 656, |
| "end": 679, |
| "text": "(Frank and Wolfe, 1956)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 686, |
| "end": 705, |
| "text": "Arora et al. (2013)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 235, |
| "end": 243, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "For each", |
| "sec_num": "1." |
| }, |
| { |
| "text": "To derive our algorithm, we make use of certain random variables under the A-HMM. Let", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Random Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "(X 1 , . . . , X N ) \u2208 [n]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Random Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "N be a random sequence of N \u2265 2 observations drawn from the model, along with the corresponding hidden state sequence", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Random Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "(H 1 , . . . , H N ) \u2208 [m] N ; independently, pick a posi- tion I \u2208 [N \u2212 1] uniformly at random. Let Y I \u2208 R d be a d-dimensional vector which is conditionally in- dependent of X I given H I , i.e., P (Y I |H I , X I ) = P (Y I |H I )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Random Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": ". We will provide how to define such a variable in Section 4.4.1: Y I will be a function of (X 1 , . . . , X N ) serving as a \"context\" representation of X I revealing the hidden state H I .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Random Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Define unigram probabilities u \u221e , u 1 \u2208 R n and bigram probabilities B \u2208 R n\u00d7n where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Random Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "u \u221e x := P (X I = x) \u2200x \u2208 [n] u 1 x := P (X I = x|I = 1) \u2200x \u2208 [n] B x,x := P (X I = x, X I+1 = x ) \u2200x, x \u2208 [n]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Random Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Additionally, define\u03c0 \u2208 R m wher\u0113", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Random Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03c0 h = P (H I = h) \u2200h \u2208 [m]", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Random Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We assume\u03c0 h > 0 for all h \u2208 [m].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Random Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The following proposition provides a way to use the NMF algorithm in Figure 1 to recover the emission parameters O up to scaling.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 69, |
| "end": 77, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Proposition 4.1. Let X I \u2208 [n] and Y I \u2208 R d be respectively an observation and a context vector drawn from the random process described in Section 4.2. Define a matrix \u2126 \u2208 R n\u00d7d with rows", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2126 x = E[Y I |X I = x] \u2200x \u2208 [n]", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "If rank(\u2126) = m, then \u2126 satisfies Condition 4.1:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2126 = O\u0398", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "where O x,h = P (H I = h|X I = x) and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u0398 h = E[Y I |H I = h].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Proof.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "E[Y I |X I = x] = m h=1 P (H I = h|X I = x) \u00d7 E[Y I |H I = h, X I = x] = m h=1 P (H I = h|X I = x) \u00d7 E[Y I |H I = h]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The last equality follows by the conditional independence of Y I . This shows \u2126 = O\u0398. By the anchor assumption of the A-HMM, each h \u2208 [m] has at least one x \u2208 A(h) such that P (H I = h|X I = x) = 1, thus \u2126 satisfies Condition 4.1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "A useful interpretation of \u2126 in Proposition 4.1 is that its rows \u2126 1 . . . \u2126 n are d-dimensional vector representations of observation states forming a convex hull in R d . This convex hull has m vertices \u2126 a 1 . . . \u2126 am corresponding to anchors a h \u2208 A(h) which can be convexly combined to realize all", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2126 1 . . . \u2126 n .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Given O, we can recover the A-HMM parameters as follows. First, we recover the stationary state distribution\u03c0 as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u03c0 h = x\u2208[n] P (H I = h|X I = x) \u00d7 P (X I = x) = x\u2208[n] O x,h \u00d7 u \u221e x", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The emission parameters O are given by Bayes' theorem:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "O x,h = P (H I = h|X I = x) \u00d7 P (X I = x) x\u2208[n] P (H I = h|X I = x) \u00d7 P (X I = x) = O x,h \u00d7 u \u221e x \u03c0 h", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Using the fact that the emission probabilities are position-independent, we see that the initial state distribution \u03c0 satisfies u 1 = O\u03c0:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "u 1 x = P (X I = x|I = 1) = h\u2208[m] P (X I = x|H I = h, I = 1) \u00d7 P (H I = h|I = 1) = h\u2208[m] O x,h \u00d7 \u03c0 h Learn-Anchor-HMM Input: \u2126 in Proposition 4.1, number of hidden states m, bigram probabilities B, unigram probabilities u \u221e , u 1 \u2022 Compute ( O, \u0398) \u2190 Anchor-NMF(\u2126, m).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 Recover the parameters:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03c0 \u2190 O u \u221e (4) O \u2190 diag(\u03c0) \u22121 diag(u \u221e ) O (5) \u03c0 = O + u 1 (6) T \u2190 (diag(\u03c0) \u22121 O + B(O ) + )", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Output: A-HMM parameters (\u03c0, T, O) Thus \u03c0 can be recovered as \u03c0 = O + u 1 where O + is the Moore-Penrose pseudoinverse of O. Finally, it can be algebraically verified that B = Odiag(\u03c0)T O . Since all the involved matrices have rank m, we can directly solve for T as Figure 2 shows the complete algorithm. As input, it receives a matrix \u2126 satisfying Proposition 4.1, the number of hidden states, and the probabilities of observed unigrams and bigrams. It first decomposes \u2126 using the NMF algorithm in Figure 1 . Then it computes the A-HMM parameters whose solution is given analytically.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 266, |
| "end": 274, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 500, |
| "end": 508, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "T = (diag(\u03c0) \u22121 O + B(O ) + )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The following theorem guarantees the consistency of the algorithm. Theorem 4.1. Let (\u03c0, T, O) be an A-HMM such that rank(T ) = m and\u03c0 defined in (2) has strictly positive entries\u03c0 h > 0. Given random variables \u2126 satisfying Proposition 4.1 and B, u \u221e , u 1 under this model, the algorithm Learn-Anchor-HMM in Figure 2 outputs (\u03c0, T, O) up to a permutation on hidden states.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 308, |
| "end": 314, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Proof. By Proposition 4.1, \u2126 satisfies Condition 4.1 with \u2126 = O\u0398, thus O can be recovered up to a permutation on columns with the algorithm Anchor-NMF. The consistency of the recovered parameters follows from the correctness of (4-7) under the rank conditions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Derivation of a Learning Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Note that (6) and (7) require computing the pseudoinverse of the estimated O, which can be expensive and vulnerable to sampling errors in practice. To make our parameter estimation more robust, we can explicitly impose probability constraints. We recover \u03c0 by solving:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained Optimization for \u03c0 and T", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03c0 = arg min \u03c0 \u2208\u2206 m\u22121 u 1 \u2212 O\u03c0 2", |
| "eq_num": "(8)" |
| } |
| ], |
| "section": "Constrained Optimization for \u03c0 and T", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "which can again be done with algorithms such as Frank-Wolfe. We recover T by maximizing the log likelihood of observation bigrams", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained Optimization for \u03c0 and T", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "x,x B x,x log \uf8eb \uf8ed h,h \u2208[m]\u03c0 h O x,h T h ,h O x ,h \uf8f6 \uf8f8 (9)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained Optimization for \u03c0 and T", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "subject to the constraint (T ) h \u2208 \u2206 m\u22121 . Since 9is concave in T with other parameters O and\u03c0 fixed, we can use EM to find the global optimum.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained Optimization for \u03c0 and T", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "In this section, we provide several ways to construct a convex hull \u2126 satisfying Proposition 4.1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of the Convex Hull \u2126", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "In order to satisfy Proposition 4.1, we need to define the context variable Y I \u2208 R d with two properties:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choice of the Context Y I", |
| "sec_num": "4.4.1" |
| }, |
| { |
| "text": "\u2022 P (Y I |H I , X I ) = P (Y I |H I )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choice of the Context Y I", |
| "sec_num": "4.4.1" |
| }, |
| { |
| "text": "\u2022 The matrix \u2126 with rows", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choice of the Context Y I", |
| "sec_num": "4.4.1" |
| }, |
| { |
| "text": "\u2126 x = E[Y I |X I = x] \u2200x \u2208 [n]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choice of the Context Y I", |
| "sec_num": "4.4.1" |
| }, |
| { |
| "text": "has rank m.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choice of the Context Y I", |
| "sec_num": "4.4.1" |
| }, |
| { |
| "text": "A simple construction (Arora et al., 2013) is given by defining Y I \u2208 R n to be an indicator vector for the next observation:", |
| "cite_spans": [ |
| { |
| "start": 22, |
| "end": 42, |
| "text": "(Arora et al., 2013)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choice of the Context Y I", |
| "sec_num": "4.4.1" |
| }, |
| { |
| "text": "[Y I ] x = 1 if X I+1 = x 0 otherwise (10)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choice of the Context Y I", |
| "sec_num": "4.4.1" |
| }, |
| { |
| "text": "The first condition is satisfied since X I+1 does not depend on X I given H I . For the second condition, observe that \u2126 x,x = P (X I+1 = x |X I = x), or in matrix form", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choice of the Context Y I", |
| "sec_num": "4.4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2126 = diag (u \u221e ) \u22121 B", |
| "eq_num": "(11)" |
| } |
| ], |
| "section": "Choice of the Context Y I", |
| "sec_num": "4.4.1" |
| }, |
| { |
| "text": "Under the rank conditions in Theorem 4.1, (11) has rank m. More generally, we can let Y I be an observation (encoded as an indicator vector as in (10)) randomly drawn from a window of L \u2208 N nearby observations. We can either only use the identity of the chosen observation (in which case Y I \u2208 R n ) or additionally indicate the relative position in the window (in which case Y I \u2208 R nL ). It is straightforward to verify that the above two conditions are satisfied under these definitions. Clearly, (11) is a special case with L = 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choice of the Context Y I", |
| "sec_num": "4.4.1" |
| }, |
| { |
| "text": "With the definition of \u2126 in the previous section, the dimension of \u2126 x is d = O(n) which can be difficult to work with when n m. Proposition 4.1 allows us to reduce the dimension as long as the final matrix retains the form in (3) and has rank m. In particular, we can multiply \u2126 by any rank-m projection matrix \u03a0 \u2208 R d\u00d7m on the right side: if \u2126 satisfies the properties in Proposition 4.1, then so does \u2126\u03a0 with m-dimensional rows", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reducing the Dimension of \u2126 x", |
| "sec_num": "4.4.2" |
| }, |
| { |
| "text": "(\u2126\u03a0) x = E[Y I \u03a0|X I = x]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reducing the Dimension of \u2126 x", |
| "sec_num": "4.4.2" |
| }, |
| { |
| "text": "Since rank(\u2126) = m, a natural choice of \u03a0 is the projection onto the best-fit m-dimensional subspace of the row space of \u2126.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reducing the Dimension of \u2126 x", |
| "sec_num": "4.4.2" |
| }, |
| { |
| "text": "We mention that previous works on the NMFlearning framework have employed various projection methods, but they do not examine relative merits of their choices. For instance, Arora et al. (2013) simply use random projection, which is convenient for theoretical analysis. Cohen and Collins (2014) use a projection based on canonical correlation analysis (CCA) without further exploration. In contrast, we give a full comparison of valid construction methods and find that the choice of \u2126 is crucial in practice.", |
| "cite_spans": [ |
| { |
| "start": 174, |
| "end": 193, |
| "text": "Arora et al. (2013)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reducing the Dimension of \u2126 x", |
| "sec_num": "4.4.2" |
| }, |
| { |
| "text": "We can formulate an alternative way to construct a valid \u2126 when the model is further restricted to be a Brown model. Since every observation is an anchor, O x \u2208 R m has a single nonzero entry for every x. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "B := diag (u \u221e ) \u22121/2 Bdiag (u \u221e ) \u22121/2 B := diag \u221a u \u221e \u22121/2 \u221a Bdiag \u221a u \u221e \u22121/2 Singular Vectors: U (M ) (V (M ))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "is an n \u00d7 m matrix of the left (right) singular vectors of M corresponding to the largest m singular values", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "\u2022 If \u03c4 = brown: set \u2126 \u2190 diag (u \u221e ) \u22121 B\u03a0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "where the projection matrix \u03a0 \u2208 R n\u00d7m is given by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "\u03a0 i,j \u223c N (0, 1/m) if \u03c4 = random \u03a0 = V (diag (u \u221e ) \u22121 B) if \u03c4 = best-fit \u03a0 = diag (u \u221e ) \u22121/2 V (B) if \u03c4 = cca", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "\u2022 If \u03c4 = brown: compute the transformed emission matrix as f (O) = U ( B) and set", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "\u2126 \u2190 diag(v) \u22121 f (O)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "v x := ||f (O) x || 2 is the length of the x-th row of f (O).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "Output: \u2126 \u2208 R n\u00d7m in Proposition 4.1 Figure 3 : Algorithm for constructing a valid \u2126 with different construction methods. For simplicity, we only show the bigram construction (context size L = 1), but an extension for larger context (L > 1) is straightforward.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 37, |
| "end": 45, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "a trivial convex hull in which every point is a vertex. This corresponds to choosing an oracle context", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "Y I \u2208 R m where [Y I ] h = 1 if H I = h 0 otherwise", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "It is possible to recover the Brown model parameters O up to element-wise scaling and rotation of rows using the algorithm of Stratos et al. (2015) . More specifically, let f (O) \u2208 R n\u00d7m denote the output of their algorithm. Then they show that for some vector s \u2208 R m with strictly positive entries and an orthogonal matrix Q \u2208 R m\u00d7m :", |
| "cite_spans": [ |
| { |
| "start": 126, |
| "end": 147, |
| "text": "Stratos et al. (2015)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "f (O) = O 1/4 diag(s)Q", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "where O 1/4 is an element-wise exponentiation of O by 1/4. Since the rows of f (O) are simply some scaling and rotation of the rows of O, using", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "\u2126 x = f (O) x / ||f (O) x || yields a valid \u2126.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "While we need to impose an additional assumption (the Brown model restriction) in order to justify this choice of \u2126, we find in our experiments that it performs better than other alternatives. We speculate that this is because a Brown model is rather appropriate for the POS tagging task; many words are indeed unambiguous with respect to POS tags (Table 5). Also, the general effectiveness of f (O) for representational purposes has been demostrated in previous works (Stratos et al., 2014; Stratos et al., 2015) . By restricting the A-HMM to be a Brown model, we can piggyback on the proven effectiveness of f (O). Figure 3 shows an algorithm for constructing \u2126 with these different construction methods. For simplicity, we only show the bigram construction (context size L = 1), but an extension for larger context (L > 1) is straightforward as discussed earlier. The construction methods random (random projection), best-fit (projection to the best-fit subspace), and cca (CCA projection) all compute (11) and differ only in how the dimension is reduced. The construction method brown computes the transformed Brown parameters f (O) as the left singular vectors of a scaled covariance matrix and then normalizes its rows. We direct the reader to Stratos et al. (2015) for a derivation of this calculation.", |
| "cite_spans": [ |
| { |
| "start": 469, |
| "end": 491, |
| "text": "(Stratos et al., 2014;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 492, |
| "end": 513, |
| "text": "Stratos et al., 2015)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 1250, |
| "end": 1271, |
| "text": "Stratos et al. (2015)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 617, |
| "end": 625, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Construction of \u2126 for the Brown Model", |
| "sec_num": "4.4.3" |
| }, |
| { |
| "text": "The x-th row of \u2126 is a d-dimensional vector representation of x lying in a convex set with m vertices. This suggests a natural way to incorporate domainspecific features: we can add additional dimensions that provide information about hidden states from the surface form of x.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2126 with Feature Augmentation", |
| "sec_num": "4.4.4" |
| }, |
| { |
| "text": "For instance, consider the the POS tagging task. In the simple construction (11), the representation of word x is defined in terms of neighboring words x :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2126 with Feature Augmentation", |
| "sec_num": "4.4.4" |
| }, |
| { |
| "text": "[\u2126 x ] x = E 1 X I+1 = x |X I = x", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2126 with Feature Augmentation", |
| "sec_num": "4.4.4" |
| }, |
| { |
| "text": "where 1(\u2022) \u2208 {0, 1} is the indicator function. We can augment this vector with s additional dimen-sions indicating the spelling features of x. For instance, the (n + 1)-th dimension may be defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2126 with Feature Augmentation", |
| "sec_num": "4.4.4" |
| }, |
| { |
| "text": "[\u2126 x ] n+1 = E [1 (x ends in \"ing\" ) |X I = x]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2126 with Feature Augmentation", |
| "sec_num": "4.4.4" |
| }, |
| { |
| "text": "This value will be generally large for verbs and small for non-verbs, nudging verbs closer together and away from non-verbs. The modified (n + s)dimensional representation is followed by the usual dimension reduction. Note that the spelling features are a deterministic function of a word, and we are implicitly assuming that they are independent of the word given its tag. While this is of course not true in practice, we find that these features can significantly boost the tagging performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2126 with Feature Augmentation", |
| "sec_num": "4.4.4" |
| }, |
| { |
| "text": "We evaluate our A-HMM learning algorithm on the task of unsupervised POS tagging. The goal of this task is to induce the correct sequence of POS tags (hidden states) given a sequence of words (observation states). The anchor condition corresponds to assuming that each POS tag has at least one word that occurs only under that tag.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Unsupervised POS tagging has long been an active area of research (Smith and Eisner, 2005a; Johnson, 2007; Toutanova and Johnson, 2007; Haghighi and Klein, 2006; Berg-Kirkpatrick et al., 2010) , but results on this task are complicated by varying assumptions and unclear evaluation metrics (Christodoulopoulos et al., 2010) . Rather than addressing multiple alternatives for evaluating unsupervised POS tagging, we focus on a simple and widely used metric: many-to-one accuracy (i.e., we map each hidden state to the most frequently coinciding POS tag in the labeled data and compute the resulting accuracy).", |
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 91, |
| "text": "(Smith and Eisner, 2005a;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 92, |
| "end": 106, |
| "text": "Johnson, 2007;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 107, |
| "end": 135, |
| "text": "Toutanova and Johnson, 2007;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 136, |
| "end": 161, |
| "text": "Haghighi and Klein, 2006;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 162, |
| "end": 192, |
| "text": "Berg-Kirkpatrick et al., 2010)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 290, |
| "end": 323, |
| "text": "(Christodoulopoulos et al., 2010)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background on Unsupervised POS Tagging", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Vanilla HMMs are notorious for their mediocre performance on this task, and it is well known that they perform poorly largely because of model misspecification, not because of suboptimal parameter estimation (e.g., because EM gets stuck in local optima). More generally, a large body of work points to the inappropriateness of simple generative models for unsupervised induction of linguistic structure (Merialdo, 1994; Smith and Eisner, 2005b; Liang and Klein, 2008) .", |
| "cite_spans": [ |
| { |
| "start": 403, |
| "end": 419, |
| "text": "(Merialdo, 1994;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 420, |
| "end": 444, |
| "text": "Smith and Eisner, 2005b;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 445, |
| "end": 467, |
| "text": "Liang and Klein, 2008)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Better Model v.s. Better Learning", |
| "sec_num": "5.1.1" |
| }, |
| { |
| "text": "Consequently, many works focus on using more expressive models such as log-linear models (Smith and Eisner, 2005a; Berg-Kirkpatrick et al., 2010) and Markov random fields (MRF) (Haghighi and Klein, 2006) . These models are shown to deliver good performance even though learning is approximate. Thus one may question the value of having a consistent estimator for A-HMMs and Brown models in this work: if the model is wrong, what is the point of learning it accurately?", |
| "cite_spans": [ |
| { |
| "start": 89, |
| "end": 114, |
| "text": "(Smith and Eisner, 2005a;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 115, |
| "end": 145, |
| "text": "Berg-Kirkpatrick et al., 2010)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 177, |
| "end": 203, |
| "text": "(Haghighi and Klein, 2006)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Better Model v.s. Better Learning", |
| "sec_num": "5.1.1" |
| }, |
| { |
| "text": "However, there is also ample evidence that HMMs are competitive for unsupervised POS induction when they incorporate domain-specific structures. Johnson (2007) is able to outperform the sophisticated MRF model of Haghighi and Klein (2006) on one-to-one accuracy by using a sparse prior in HMM estimation. The clustering method of Brown et al. (1992) which is based on optimizing the likelihood under the Brown model (a special case of HMM) remains a baseline difficult to outperform (Christodoulopoulos et al., 2010) .", |
| "cite_spans": [ |
| { |
| "start": 145, |
| "end": 159, |
| "text": "Johnson (2007)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 213, |
| "end": 238, |
| "text": "Haghighi and Klein (2006)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 330, |
| "end": 349, |
| "text": "Brown et al. (1992)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 483, |
| "end": 516, |
| "text": "(Christodoulopoulos et al., 2010)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Better Model v.s. Better Learning", |
| "sec_num": "5.1.1" |
| }, |
| { |
| "text": "We add to this evidence by demonstrating the effectiveness of A-HMMs on this task. We also check the anchor assumption on data and show that the A-HMM model structure is in fact appropriate for the problem (Table 5) .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 206, |
| "end": 215, |
| "text": "(Table 5)", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Better Model v.s. Better Learning", |
| "sec_num": "5.1.1" |
| }, |
| { |
| "text": "We use the universal treebank dataset (version 2.0) which contains sentences annotated with 12 POS tag types for 10 languages (McDonald et al., 2013) . All models are trained with 12 hidden states. We use the English portion to experiment with different hyperparameter configurations. At test time, we fix a configuration (based on the English portion) and apply it across all languages.", |
| "cite_spans": [ |
| { |
| "start": 126, |
| "end": 149, |
| "text": "(McDonald et al., 2013)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setting", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The list of compared methods is given below:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setting", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "BW The Baum-Welch algorithm, an EM algorithm for HMMs (Baum and Petrie, 1966) .", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 77, |
| "text": "(Baum and Petrie, 1966)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setting", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "CLUSTER A parameter estimation scheme for HMMs based on Brown clustering (Brown et al., 1992) . We run the Brown clustering algorithm 1 to obtain 12 word clusters C 1 . . . C 12 . Then we set the emission parameters o(x|h), transition parameters t(h |h), and prior \u03c0(h) to be the maximumlikelihood estimates under the fixed clusters.", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 93, |
| "text": "(Brown et al., 1992)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setting", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "ANCHOR Our algorithm Learn-Anchor-HMM in Figure 2 but with the constrained optimization (8) and (9) for estimating \u03c0 and T . 2 ANCHOR-FEATURES Same as ANCHOR but employs the feature augmentation scheme described in Section 4.4.4.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 41, |
| "end": 49, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Setting", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The unsupervised log-linear model described in Berg-Kirkpatrick et al. (2010) . Instead of emission parameters o(x|h), the model maintains a miniature log-linear model with a weight vector w and a feature function \u03c6. The probability of a word x given tag h is computed as", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 77, |
| "text": "Berg-Kirkpatrick et al. (2010)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LOG-LINEAR", |
| "sec_num": null |
| }, |
| { |
| "text": "p(x|h) = exp(w \u03c6(x, h)) x\u2208[n] exp(w \u03c6(x, h))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LOG-LINEAR", |
| "sec_num": null |
| }, |
| { |
| "text": "The model can be trained by maximizing the likelihood of observed sequences. We use L-BFGS to directly optimize this objective. 3 This approach obtains the current state-of-the-art accuracy on finegrained (45 tags) English WSJ dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LOG-LINEAR", |
| "sec_num": null |
| }, |
| { |
| "text": "We use maximum marginal decoding for HMM predictions: i.e., at each position, we predict the most likely tag given the entire sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LOG-LINEAR", |
| "sec_num": null |
| }, |
| { |
| "text": "In our experiments, we find that Anchor-NMF (Figure 1) tends to propose extremely rare words as anchors. A simple fix is to search for anchors only among relatively frequent words. We find that any reasonable frequency threshold works well; we use the 300 most frequent words. Note that this is not a problem if these 300 words include anchor words corresponding to all the 12 tags.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 44, |
| "end": 54, |
| "text": "(Figure 1)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Practical Issues with the Anchor Algorithm", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We must define the context for constructing \u2126. We use the previous and next words (i.e., context size L = 2) marked with relative positions. Thus \u2126 has 2n columns before dimension reduction. Table 1 : Many-to-one accuracy on the English data with different choices of the convex hull \u2126 (Figure 3) . These results do not use spelling features.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 191, |
| "end": 198, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 286, |
| "end": 296, |
| "text": "(Figure 3)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Practical Issues with the Anchor Algorithm", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "construction (\u03c4 = brown in Figure 3 ) clearly performs the best: essentially, the anchor algorithm is used to extract the HMM parameters from the CCAbased word embeddings of Stratos et al. (2015) . We also explore feature augmentation discussed in Section 4.4.4. For comparison, we employ the same word features used by Berg-Kirkpatrick et al. (2010):", |
| "cite_spans": [ |
| { |
| "start": 174, |
| "end": 195, |
| "text": "Stratos et al. (2015)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 27, |
| "end": 35, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Practical Issues with the Anchor Algorithm", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "\u2022 Indicators for whether a word is capitalized, contains a hyphen, or contains a digit", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Practical Issues with the Anchor Algorithm", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "\u2022 Suffixes of length 1, 2, and 3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Practical Issues with the Anchor Algorithm", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We weigh the l 2 norm of these extra dimensions in relation to the original dimensions: we find a small weight (e.g., 0.1 of the norm of the original dimensions) works well. We also find that these features can sometimes significantly improve the performance. For instance, the accuracy on the English portion can be improved from 66.1% to 71.4% with feature augmentation. Another natural experiment is to refine the HMM parameters obtained from the anchor algorithm (or Brown clusters) with a few iterations of the Baum-Welch algorithm. In our experiments, however, it did not significantly improve the tagging performance, so we omit this result. Table 2 shows the many-to-one accuracy on all languages in the dataset. For the Baum-Welch algorithm and the unsupervised log-linear models, we report the mean and the standard deviation (in parentheses) of 10 random restarts run for 1,000 iterations.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 649, |
| "end": 656, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Practical Issues with the Anchor Algorithm", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Both ANCHOR and ANCHOR-FEATURES compete favorably. On 5 out of 10 languages, ANCHOR-FEATURES achieves the highest accuracy, often 56.7 Table 2 : Many-to-one accuracy on each language using 12 universal tags. The first four models are HMMs estimated with the Baum-Welch algorithm (BW), the clustering algorithm of Brown et al. (1992) , the anchor algorithm without (ANCHOR) and with (ANCHOR-FEATURES) feature augmentation. LOG-LINEAR is the model of Berg-Kirkpatrick et al. 2010trained with the direct-gradient method using L-BFGS. For BW and LOG-LINEAR, we report the mean and the standard deviation (in parentheses) of 10 random restarts run for 1,000 iterations.", |
| "cite_spans": [ |
| { |
| "start": 313, |
| "end": 332, |
| "text": "Brown et al. (1992)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 135, |
| "end": 142, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tagging Accuracy", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "closely followed by ANCHOR. The Brown clustering estimation is also competitive and has the highest accuracy on 3 languages. Not surprisingly, vanilla HMMs trained with BW perform the worst (see Section 5.1.1 for a discussion).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tagging Accuracy", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "LOG-LINEAR is a robust baseline and performs the best on the remaining 2 languages. It performs especially strongly on Japanese and Korean datasets in which poorly segmented strings such as \"1950\u5e7411\u67085\u65e5\u306b\u306f\" (on November 5, 1950) and \"40.3%\u1105 \u1169\" (by 40.3%) abound. In these datasets, it is crucial to make effective use of morphological features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tagging Accuracy", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "An A-HMM can be easily interpreted since each hidden state is marked with an anchor observation. Table 3 shows the 12 anchors found in each language. Note that these anchor words generally have a wide coverage of possible POS tags.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 97, |
| "end": 104, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A-HMM Parameters", |
| "sec_num": "5.5.1" |
| }, |
| { |
| "text": "We also experimented with using true anchor words (obtained from labeled data), but they did not improve performance over automatically induced anchors. Since anchor discovery is inherently tied to parameter estimation, it is better to obtain anchors in a data-driven manner. In particular, certain POS tags (e.g., X) appear quite infrequently, and the model is worse off by being forced to allocate a hidden state for such a tag. Table 4 shows words with highest emission probabilities o(x|h) under each anchor. We observe that an anchor is representative of a certain group of words. For instance, the state \"loss\" represents noun-like words, \"1\" represents numbers, \"on\" represents preposition-like words, \"one\" represents determiner-like words, and \"closed\" represents verb-like words. The conditional distribution is peaked for anchors that represent function tags (e.g., determiners, punctuation) and flat for anchors that represent content tags (e.g., nouns). Occasionally, an anchor assigns high probabilities to words that do not seem to belong to the corresponding POS tag. But this is to be expected since o(x|h) \u221d P (X I = x) is generally larger for frequent words. Table 5 checks the assumptions in A-HMMs and Brown models on the universal treebank dataset. The anchor assumption is indeed satisfied with 12 universal tags: in every language, each tag has at least one word uniquely associated with the tag. The Brown assumption (each word has exactly one possible tag) is of course not satisfied, since some words are genuinely ambiguous with respect to their POS tags. However, the percentage of unambiguous words is very high (well over 90%). This analysis supports that the model assumptions made by A-HMMs and Brown models are appropriate for POS tagging. Table 6 reports the log likelihood (normalized by the number of words) on the English portion of different estimation methods for HMMs. 
BW and CLUSTER obtain higher likelihood than the anchor algorithm, but this is expected given that both EM There has recently been great progress in estimation of models with latent variables. Despite the NP-hardness in general cases (Terwijn, 2002; Arora et al., 2012) , many algorithms with strong theoretical guarantees have emerged under natural assumptions. For example, for HMMs with full-rank conditions, derive a consistent estimator of the marginal distribution of observed sequences. Anandkumar et al. (2014) propose an exact tensor decomposition method for learning a wide class of latent variable models with similar non-degeneracy conditions. Arora et al. (2013) derive a provably cor-rect learning algorithm for topic models with a certain parameter structure. The anchor-based framework has been originally formulated for learning topic models (Arora et al., 2013) . It has been subsequently adopted to learn other models such as latent-variable probabilistic context-free grammars (Cohen and Collins, 2014) . In our work, we have extended this framework to address unsupervised sequence labeling. Zhou et al. (2014) also extend Arora et al. (2013)'s framework to learn various models including HMMs, but they address a more general problem. Consequently, their algorithm draws from Anandkumar et al. (2012) and is substantially different from ours.", |
| "cite_spans": [ |
| { |
| "start": 2144, |
| "end": 2159, |
| "text": "(Terwijn, 2002;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 2160, |
| "end": 2179, |
| "text": "Arora et al., 2012)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 2404, |
| "end": 2428, |
| "text": "Anandkumar et al. (2014)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 2566, |
| "end": 2585, |
| "text": "Arora et al. (2013)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 2769, |
| "end": 2789, |
| "text": "(Arora et al., 2013)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 2907, |
| "end": 2932, |
| "text": "(Cohen and Collins, 2014)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 3023, |
| "end": 3041, |
| "text": "Zhou et al. (2014)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 3208, |
| "end": 3232, |
| "text": "Anandkumar et al. (2012)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 431, |
| "end": 438, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 1178, |
| "end": 1185, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 1774, |
| "end": 1781, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "A-HMM Parameters", |
| "sec_num": "5.5.1" |
| }, |
| { |
| "text": "Unsupervised POS tagging is a classic problem in unsupervised learning that has been tackled with various approaches (Johnson, 2007). Table 6: Log likelihood normalized by the number of words on English (along with accuracy). For BW, we report the mean of 10 random restarts run for 1,000 iterations.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 132, |
| "text": "Johnson (2007)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 133, |
| "end": 140, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Unsupervised POS Tagging", |
| "sec_num": "6.2" |
| }, |
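Table 6 reports log likelihood normalized by the number of words. As a point of reference, here is a minimal sketch of how such a quantity can be computed for an HMM with the scaled forward algorithm; the interface and parameter names (`pi`, `T`, `O`) are hypothetical, not taken from the paper's implementation:

```python
import numpy as np

def per_word_log_likelihood(sentences, pi, T, O):
    # Hypothetical toy interface: pi[h] is the initial-state distribution,
    # T[h2, h1] = p(h2 | h1) the transition matrix, and O[x, h] = p(word x | state h)
    # the emission matrix of an HMM.
    total_ll, total_words = 0.0, 0
    for x in sentences:
        alpha = pi * O[x[0]]               # joint p(x_1, h_1) over states h_1
        ll = np.log(alpha.sum())
        alpha = alpha / alpha.sum()        # rescale to avoid underflow
        for t in range(1, len(x)):
            alpha = O[x[t]] * (T @ alpha)  # one forward step
            s = alpha.sum()
            ll += np.log(s)                # accumulate log scaling factors
            alpha = alpha / s
        total_ll += ll                     # ll = log p(x) for this sentence
        total_words += len(x)
    return total_ll / total_words
```

The scaling trick keeps the recursion numerically stable on long sentences while the accumulated log scaling factors recover the exact log likelihood.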
| { |
| "text": "EM performs poorly in this task because it induces flat distributions; this is not the case with our algorithm, as seen in the peaky distributions in Table 4. Haghighi and Klein (2006) assume a set of prototypical words for each tag and report high accuracy.", |
| "cite_spans": [ |
| { |
| "start": 159, |
| "end": 184, |
| "text": "Haghighi and Klein (2006)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 149, |
| "end": 156, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Unsupervised POS Tagging", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "In contrast, our algorithm automatically finds such prototypes in a subroutine. Berg-Kirkpatrick et al. (2010) achieve the state-of-the-art result in unsupervised fine-grained POS tagging (mid-70%). As described in Section 5.2, their model is an HMM in which probabilities are given by log-linear models. Table 7 provides a point of reference comparing our work with Berg-Kirkpatrick et al. (2010) in their setting: models are trained and tested on the entire 45-tag WSJ dataset. Their model outperforms our approach in this setting: with fine-grained tags, spelling features become more important, for instance to distinguish \"played\" (VBD) from \"play\" (VBP). Nonetheless, we have shown that our approach is competitive when universal tags are used (Table 2).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 303, |
| "end": 310, |
| "text": "Table 7", |
| "ref_id": null |
| }, |
| { |
| "start": 748, |
| "end": 757, |
| "text": "(Table 2)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Unsupervised POS Tagging", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Many past works on POS induction predate the introduction of the universal tagset by Petrov et al. (2012) and thus report results with fine-grained tags. More recent works adopt the universal tagset, but they leverage additional resources. For instance, Das and Petrov (2011) and T\u00e4ckstr\u00f6m et al. (2013) use parallel data to project POS tags from a supervised source language. Li et al. (2012) use tag dictionaries built from Wiktionary. Thus their results are not directly comparable to ours.4 Table 7: Many-to-one accuracy on the English data with 45 original tags. We use the same setting as in Table 2. For BW and LOG-LINEAR, we report the mean and the standard deviation (in parentheses) of 10 random restarts run for 1,000 iterations.", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 105, |
| "text": "Petrov et al. (2012)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 512, |
| "end": 533, |
| "text": "Das and Petrov (2011)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 538, |
| "end": 561, |
| "text": "T\u00e4ckstr\u00f6m et al. (2013)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 635, |
| "end": 651, |
| "text": "Li et al. (2012)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 214, |
| "end": 221, |
| "text": "Table 7", |
| "ref_id": null |
| }, |
| { |
| "start": 318, |
| "end": 325, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Unsupervised POS Tagging", |
| "sec_num": "6.2" |
| }, |
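Tables 2 and 7 report many-to-one accuracy. For concreteness, here is a small sketch of this standard evaluation metric, which maps each induced state to the gold tag it co-occurs with most often and then scores tagging accuracy under that map; the function name and interface are illustrative, not from the paper:

```python
from collections import Counter, defaultdict

def many_to_one_accuracy(induced, gold):
    # induced: token-aligned sequence of induced state ids
    # gold:    token-aligned sequence of reference tags
    counts = defaultdict(Counter)
    for c, t in zip(induced, gold):
        counts[c][t] += 1
    # Each induced state maps to its most frequent gold tag (many-to-one).
    mapping = {c: tags.most_common(1)[0][0] for c, tags in counts.items()}
    return sum(mapping[c] == t for c, t in zip(induced, gold)) / len(gold)
```

Because several induced states may map to the same gold tag, this metric never penalizes over-splitting a tag, which is why it is paired with other diagnostics (such as likelihood) in the evaluation.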
| { |
| "text": "We have presented an exact estimation method for learning anchor HMMs from unlabeled data. There are several directions for future work. An important direction is to extend the method to a richer family of models such as log-linear models or neural networks. Another direction is to further generalize the method to handle a wider class of HMMs by relaxing the anchor condition (Condition 4.1). This will require a significant extension of the NMF algorithm in Figure 1.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 461, |
| "end": 470, |
| "text": "Figure 1.", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Transactions of the Association for Computational Linguistics, vol. 4, pp. 245-257, 2016. Action Editor: Hinrich Sch\u00fctze.Submission batch: 1/2016; Revision batch: 3/2016; Published 6/2016. c 2016 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We use the implementation of Liang (2005).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/karlstratos/anchor. 3 We use the implementation of Berg-Kirkpatrick et al. (2010) (personal communication).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Das and Petrov (2011) conduct unsupervised experiments using the model of Berg-Kirkpatrick et al. (2010), but their dataset and evaluation method differ from ours.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank Taylor Berg-Kirkpatrick for providing the implementation of Berg-Kirkpatrick et al. (2010) . We also thank anonymous reviewers for their constructive comments.", |
| "cite_spans": [ |
| { |
| "start": 69, |
| "end": 99, |
| "text": "Berg-Kirkpatrick et al. (2010)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A method of moments for mixture models and hidden Markov models", |
| "authors": [ |
| { |
| "first": "Animashree", |
| "middle": [], |
| "last": "Anandkumar", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Hsu", |
| "suffix": "" |
| }, |
| { |
| "first": "Sham", |
| "middle": [ |
| "M" |
| ], |
| "last": "Kakade", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Twenty-Fifth Annual Conference on Learning Theory", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Animashree Anandkumar, Daniel Hsu, and Sham M. Kakade. 2012. A method of moments for mixture models and hidden Markov models. In Twenty-Fifth Annual Conference on Learning Theory.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Tensor decompositions for learning latent variable models", |
| "authors": [ |
| { |
| "first": "Animashree", |
| "middle": [], |
| "last": "Anandkumar", |
| "suffix": "" |
| }, |
| { |
| "first": "Rong", |
| "middle": [], |
| "last": "Ge", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Hsu", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Sham", |
| "suffix": "" |
| }, |
| { |
| "first": "Matus", |
| "middle": [], |
| "last": "Kakade", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Telgarsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "15", |
| "issue": "1", |
| "pages": "2773--2832", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. 2014. Ten- sor decompositions for learning latent variable models. Journal of Machine Learning Research, 15(1):2773- 2832.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Learning topic models-going beyond SVD", |
| "authors": [ |
| { |
| "first": "Sanjeev", |
| "middle": [], |
| "last": "Arora", |
| "suffix": "" |
| }, |
| { |
| "first": "Rong", |
| "middle": [], |
| "last": "Ge", |
| "suffix": "" |
| }, |
| { |
| "first": "Ankur", |
| "middle": [], |
| "last": "Moitra", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Foundations of Computer Science (FOCS), 2012 IEEE 53rd Annual Symposium on", |
| "volume": "", |
| "issue": "", |
| "pages": "1--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sanjeev Arora, Rong Ge, and Ankur Moitra. 2012. Learning topic models-going beyond SVD. In Foun- dations of Computer Science (FOCS), 2012 IEEE 53rd Annual Symposium on, pages 1-10. IEEE.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A practical algorithm for topic modeling with provable guarantees", |
| "authors": [ |
| { |
| "first": "Sanjeev", |
| "middle": [], |
| "last": "Arora", |
| "suffix": "" |
| }, |
| { |
| "first": "Rong", |
| "middle": [], |
| "last": "Ge", |
| "suffix": "" |
| }, |
| { |
| "first": "Yonatan", |
| "middle": [], |
| "last": "Halpern", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mimno", |
| "suffix": "" |
| }, |
| { |
| "first": "Ankur", |
| "middle": [], |
| "last": "Moitra", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Sontag", |
| "suffix": "" |
| }, |
| { |
| "first": "Yichen", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 30th International Conference on Machine Learning (ICML-13)", |
| "volume": "", |
| "issue": "", |
| "pages": "280--288", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sanjeev Arora, Rong Ge, Yonatan Halpern, David Mimno, Ankur Moitra, David Sontag, Yichen Wu, and Michael Zhu. 2013. A practical algorithm for topic modeling with provable guarantees. In Proceedings of the 30th International Conference on Machine Learn- ing (ICML-13), pages 280-288.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Statistical inference for probabilistic functions of finite state Markov chains. The Annals of Mathematical Statistics", |
| "authors": [ |
| { |
| "first": "Leonard", |
| "middle": [ |
| "E" |
| ], |
| "last": "Baum", |
| "suffix": "" |
| }, |
| { |
| "first": "Ted", |
| "middle": [], |
| "last": "Petrie", |
| "suffix": "" |
| } |
| ], |
| "year": 1966, |
| "venue": "", |
| "volume": "37", |
| "issue": "", |
| "pages": "1554--1563", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leonard E. Baum and Ted Petrie. 1966. Statisti- cal inference for probabilistic functions of finite state Markov chains. The Annals of Mathematical Statis- tics, 37(6):1554-1563.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Painless unsupervised learning with features", |
| "authors": [ |
| { |
| "first": "Taylor", |
| "middle": [], |
| "last": "Berg-Kirkpatrick", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Bouchard-C\u00f4t\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Denero", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "582--590", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taylor Berg-Kirkpatrick, Alexandre Bouchard-C\u00f4t\u00e9, John DeNero, and Dan Klein. 2010. Painless unsu- pervised learning with features. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Com- putational Linguistics, pages 582-590. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Classbased n-gram models of natural language", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Peter", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [ |
| "V" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "L" |
| ], |
| "last": "Desouza", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mercer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jenifer", |
| "middle": [ |
| "C" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lai", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Computational Linguistics", |
| "volume": "18", |
| "issue": "4", |
| "pages": "467--479", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter F. Brown, Peter V. Desouza, Robert L. Mercer, Vin- cent J. Della Pietra, and Jenifer C. Lai. 1992. Class- based n-gram models of natural language. Computa- tional Linguistics, 18(4):467-479.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Two decades of unsupervised POS induction: How far have we come?", |
| "authors": [ |
| { |
| "first": "Christos", |
| "middle": [], |
| "last": "Christodoulopoulos", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "575--584", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christos Christodoulopoulos, Sharon Goldwater, and Mark Steedman. 2010. Two decades of unsupervised POS induction: How far have we come? In Proceed- ings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 575-584. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A provably correct learning algorithm for latent-variable PCFGs", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Shay", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Cohen", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shay B. Cohen and Michael Collins. 2014. A provably correct learning algorithm for latent-variable PCFGs. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Unsupervised part-of-speech tagging with bilingual graph-based projections", |
| "authors": [ |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "600--609", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based pro- jections. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 600- 609. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "An algorithm for quadratic programming", |
| "authors": [ |
| { |
| "first": "Marguerite", |
| "middle": [], |
| "last": "Frank", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Wolfe", |
| "suffix": "" |
| } |
| ], |
| "year": 1956, |
| "venue": "Naval Research Logistics Quarterly", |
| "volume": "3", |
| "issue": "1-2", |
| "pages": "95--110", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marguerite Frank and Philip Wolfe. 1956. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95-110.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Prototype-driven learning for sequence models", |
| "authors": [ |
| { |
| "first": "Aria", |
| "middle": [], |
| "last": "Haghighi", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "320--327", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. In Proceedings of the main conference on Human Language Technol- ogy Conference of the North American Chapter of the Association of Computational Linguistics, pages 320- 327. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A spectral algorithm for learning hidden Markov models", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Hsu", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Sham", |
| "suffix": "" |
| }, |
| { |
| "first": "Tong", |
| "middle": [], |
| "last": "Kakade", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Journal of Computer and System Sciences", |
| "volume": "78", |
| "issue": "5", |
| "pages": "1460--1480", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Hsu, Sham M. Kakade, and Tong Zhang. 2012. A spectral algorithm for learning hidden Markov mod- els. Journal of Computer and System Sciences, 78(5):1460-1480.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Why doesn't EM find good HMM POS-taggers?", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "EMNLP-CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "296--305", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Johnson. 2007. Why doesn't EM find good HMM POS-taggers? In EMNLP-CoNLL, pages 296-305.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Wikily supervised part-of-speech tagging", |
| "authors": [ |
| { |
| "first": "Shen", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Joao", |
| "middle": [ |
| "V" |
| ], |
| "last": "Gra\u00e7a", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "1389--1398", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shen Li, Joao V. Gra\u00e7a, and Ben Taskar. 2012. Wiki- ly supervised part-of-speech tagging. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1389-1398. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Analyzing the errors of unsupervised learning", |
| "authors": [ |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "879--887", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Percy Liang and Dan Klein. 2008. Analyzing the errors of unsupervised learning. In ACL, pages 879-887.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Semi-supervised learning for natural language", |
| "authors": [ |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Percy Liang. 2005. Semi-supervised learning for natural language. Master's thesis, Massachusetts Institute of Technology.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Oscar T\u00e4ckstr\u00f6m, Claudia Bedini, N\u00faria B. Castell\u00f3, and Jungmee Lee", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [ |
| "T" |
| ], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Yvonne", |
| "middle": [], |
| "last": "Quirmbach-Brundage", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuzman", |
| "middle": [], |
| "last": "Ganchev", |
| "suffix": "" |
| }, |
| { |
| "first": "Keith", |
| "middle": [ |
| "B" |
| ], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "92--97", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan T. McDonald, Joakim Nivre, Yvonne Quirmbach- Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith B. Hall, Slav Petrov, Hao Zhang, Os- car T\u00e4ckstr\u00f6m, Claudia Bedini, N\u00faria B. Castell\u00f3, and Jungmee Lee. 2013. Universal dependency annota- tion for multilingual parsing. In ACL, pages 92-97.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Tagging English text with a probabilistic model", |
| "authors": [ |
| { |
| "first": "Bernard", |
| "middle": [], |
| "last": "Merialdo", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Computational Linguistics", |
| "volume": "20", |
| "issue": "2", |
| "pages": "155--171", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernard Merialdo. 1994. Tagging English text with a probabilistic model. Computational Linguistics, 20(2):155-171.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "A universal part-of-speech tagset", |
| "authors": [ |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proceedings of the Eighth International Conference on Language Re- sources and Evaluation (LREC'12).", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Contrastive estimation: Training log-linear models on unlabeled data", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Noah", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "354--362", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noah A. Smith and Jason Eisner. 2005a. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 354-362. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Guiding unsupervised grammar induction using contrastive estimation", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Noah", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of IJCAI Workshop on Grammatical Inference Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "73--82", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noah A. Smith and Jason Eisner. 2005b. Guiding un- supervised grammar induction using contrastive esti- mation. In Proc. of IJCAI Workshop on Grammatical Inference Applications, pages 73-82.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "A spectral algorithm for learning classbased n-gram models of natural language", |
| "authors": [ |
| { |
| "first": "Karl", |
| "middle": [], |
| "last": "Stratos", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hsu", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the thirtieth conference on Uncertainty in Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karl Stratos, Do-kyum Kim, Michael Collins, and Daniel Hsu. 2014. A spectral algorithm for learning class- based n-gram models of natural language. In Proceed- ings of the thirtieth conference on Uncertainty in Arti- ficial Intelligence.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Model-based word embeddings from decompositions of count matrices", |
| "authors": [ |
| { |
| "first": "Karl", |
| "middle": [], |
| "last": "Stratos", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Hsu", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "1282--1291", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karl Stratos, Michael Collins, and Daniel Hsu. 2015. Model-based word embeddings from decompositions of count matrices. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguis- tics and the 7th International Joint Conference on Nat- ural Language Processing (Volume 1: Long Papers), pages 1282-1291, Beijing, China, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Token and type constraints for cross-lingual part-of-speech tagging", |
| "authors": [ |
| { |
| "first": "Oscar", |
| "middle": [], |
| "last": "T\u00e4ckstr\u00f6m", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mc-Donald", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, Slav Petrov, Ryan Mc- Donald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. Transactions of the Association for Computational Linguistics, 1:1-12.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "On the learnability of hidden Markov models", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Sebastiaan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Terwijn", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Grammatical Inference: Algorithms and Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "261--268", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sebastiaan A. Terwijn. 2002. On the learnability of hid- den Markov models. In Grammatical Inference: Al- gorithms and Applications, pages 261-268. Springer.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "A Bayesian LDA-based model for semi-supervised partof-speech tagging", |
| "authors": [ |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "1521--1528", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kristina Toutanova and Mark Johnson. 2007. A Bayesian LDA-based model for semi-supervised part- of-speech tagging. In Advances in Neural Information Processing Systems, pages 1521-1528.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "On the complexity of nonnegative matrix factorization", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Stephen", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Vavasis", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "SIAM Journal on Optimization", |
| "volume": "20", |
| "issue": "3", |
| "pages": "1364--1377", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen A. Vavasis. 2009. On the complexity of nonneg- ative matrix factorization. SIAM Journal on Optimiza- tion, 20(3):1364-1377.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Divide-and-conquer learning by anchoring a conical hull", |
| "authors": [ |
| { |
| "first": "Tianyi", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [ |
| "A" |
| ], |
| "last": "Bilmes", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Guestrin", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "1242--1250", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tianyi Zhou, Jeff A. Bilmes, and Carlos Guestrin. 2014. Divide-and-conquer learning by anchoring a conical hull. In Advances in Neural Information Processing Systems, pages 1242-1250.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Definition 3.2. A Brown model is an A-HMM in which A(1) . . . A(m) partition [n]." |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Non-negative matrix factorization algorithm of Arora et al. (2012)." |
| }, |
| "FIGREF2": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "NMF-based learning algorithm for A-HMMs. The algorithm Anchor-NMF is given in Figure 1." |
| }, |
| "FIGREF3": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "\u2126 x = O x / ||O x || defines an indicator vector for the unique hidden state of x. Input: bigram probabilities B, unigram probabilities u \u221e, number of hidden states m, construction method \u03c4. Scaled matrices (\u221a \u2022 is element-wise)." |
| }, |
| "TABREF1": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>shows the performance on the English portion with</td></tr><tr><td>different construction methods for \u2126. The Brown</td></tr></table>", |
| "text": "", |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>(.01)</td></tr></table>", |
| "text": "Anchor words found in each language (model ANCHOR-FEATURES). Sample row for anchor \"loss\": year (.02), market (.01), share (.01), company (.01), stock (.01), quarter (.01), shares (.01), price", |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>and Brown clustering directly optimize likelihood.</td></tr><tr><td>In contrast, the anchor algorithm is based on the</td></tr><tr><td>method of moments and does not (at least directly)</td></tr><tr><td>optimize likelihood. Note that high likelihood does</td></tr><tr><td>not imply high accuracy under HMMs.</td></tr><tr><td>6 Related Work</td></tr><tr><td>6.1 Latent-Variable Models</td></tr></table>", |
| "text": "Most likely words under each anchor word (English model ANCHOR-FEATURES). Emission probabilities o(x|h) are given in parentheses.", |
| "type_str": "table" |
| }, |
| "TABREF7": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>Model</td><td colspan=\"2\">Normalized LL Acc</td></tr><tr><td>BW</td><td>-6.45</td><td>59.8</td></tr><tr><td>CLUSTER</td><td>-6.71</td><td>62.9</td></tr><tr><td>ANCHOR</td><td>-7.06</td><td>66.1</td></tr><tr><td>ANCHOR-FEATURES</td><td>-7.05</td><td>71.4</td></tr></table>", |
| "text": "Verifying model assumptions on the universal treebank. The anchor assumption is satisfied in every language. The Brown assumption (each word has exactly one possible tag) is violated but not by a large margin. The lower table shows the most frequent anchor word and its count under each tag on the English portion.", |
| "type_str": "table" |
| } |
| } |
| } |
| } |