ACL-OCL / Base_JSON /prefixS /json /sigmorphon /2021.sigmorphon-1.19.json
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:30:34.402874Z"
},
"title": "Simple induction of (deterministic) probabilistic finite-state automata for phonotactics by stochastic gradient descent",
"authors": [
{
"first": "Huteng",
"middle": [],
"last": "Dai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rutgers University",
"location": {}
},
"email": "huteng.dai@rutgers.edu"
},
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "Irvine"
}
},
"email": "rfutrell@uci.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce a simple and highly general phonotactic learner which induces a probabilistic finite-state automaton from word-form data. We describe the learner and show how to parameterize it to induce unrestricted regular languages, as well as how to restrict it to certain subregular classes such as Strictly k-Local and Strictly k-Piecewise languages. We evaluate the learner on its ability to learn phonotactic constraints in toy examples and in datasets of Quechua and Navajo. We find that an unrestricted learner is the most accurate overall when modeling attested forms not seen in training; however, only the learner restricted to the Strictly Piecewise language class successfully captures certain nonlocal phonotactic constraints. Our learner serves as a baseline for more sophisticated methods.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce a simple and highly general phonotactic learner which induces a probabilistic finite-state automaton from word-form data. We describe the learner and show how to parameterize it to induce unrestricted regular languages, as well as how to restrict it to certain subregular classes such as Strictly k-Local and Strictly k-Piecewise languages. We evaluate the learner on its ability to learn phonotactic constraints in toy examples and in datasets of Quechua and Navajo. We find that an unrestricted learner is the most accurate overall when modeling attested forms not seen in training; however, only the learner restricted to the Strictly Piecewise language class successfully captures certain nonlocal phonotactic constraints. Our learner serves as a baseline for more sophisticated methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural language phonotactics is argued to fall in the class of regular languages, or even in a smaller class of subregular languages. This observation has motivated the study of probabilistic finite-state automata (PFAs) that generate these languages as models of phonotactics. Here we implement a simple method for the induction of PFAs for phonotactics from data, which can induce general regular languages in addition to languages in certain more restricted subclasses, for example, Strictly k-Local and Strictly k-Piecewise languages (Heinz, 2018; Heinz and Rogers, 2010). We evaluate our learner on corpus data from Quechua and Navajo, with a particular emphasis on the ability to learn nonlocal constraints.",
"cite_spans": [
{
"start": 540,
"end": 553,
"text": "(Heinz, 2018;",
"ref_id": "BIBREF13"
},
{
"start": 554,
"end": 577,
"text": "Heinz and Rogers, 2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We make both theoretical and empirical contributions. Theoretically, we present the differentiable linear-algebraic formulation of PFAs which enables learning of the structure of the automaton by gradient descent. In our framework, it is possible to induce an unrestricted automaton with a given number of states, or an automaton with hard-coded constraints representing various subregular languages. This work fills a gap in the formal linguistics literature, where learners have been developed within certain subregular classes (Shibata and Heinz, 2019; Heinz, 2010; Heinz and Rogers, 2010; Futrell et al., 2017), whereas our learner can in principle induce any (sub)regular language. In addition, we demonstrate how Strictly Local and Strictly Piecewise constraints can be encoded within our framework, and show how information-theoretic regularization can be applied to produce deterministic automata.",
"cite_spans": [
{
"start": 530,
"end": 555,
"text": "(Shibata and Heinz, 2019;",
"ref_id": "BIBREF25"
},
{
"start": 556,
"end": 568,
"text": "Heinz, 2010;",
"ref_id": "BIBREF12"
},
{
"start": 569,
"end": 592,
"text": "Heinz and Rogers, 2010;",
"ref_id": "BIBREF15"
},
{
"start": 593,
"end": 614,
"text": "Futrell et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Empirically, our main result is to show that our approach gives reasonable and linguistically accurate results. We find that inducing an unrestricted PFA produces the best fit to held-out attested forms, while inducing an automaton for a Strictly 2-Piecewise language yields a model that successfully captures nonlocal constraints. We also analyze the nondeterminism of induced automata, and the extent to which induced automata overfit to their training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A probabilistic finite-state automaton (PFA) for generating sequences consists of a finite set of states Q, an inventory of symbols \u03a3, an emission distribution with probability mass function p(x|q), which gives the probability of generating a symbol x \u2208 \u03a3 given state q \u2208 Q, and a transition distribution with probability mass function p(q\u2032|q, x), which gives the probability of transitioning into a new state q\u2032 from state q after emission of symbol x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model specification 2.1 Probabilistic Finite-state Automata",
"sec_num": "2"
},
{
"text": "We parameterize a PFA using a family of right-stochastic matrices. The emission matrix E, of shape |Q| \u00d7 |\u03a3|, gives the probability of emitting a symbol x given a state. Each row of the matrix represents a state, and each column represents an output symbol. Given a distribution on states represented as a stochastic row vector q, the probability mass function over symbols is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model specification 2.1 Probabilistic Finite-state Automata",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(\u2022|q) = q E.",
"eq_num": "(1)"
}
],
"section": "Model specification 2.1 Probabilistic Finite-state Automata",
"sec_num": "2"
},
{
"text": "Each symbol x \u2208 \u03a3 is associated with a right-stochastic transition matrix T_x of shape |Q| \u00d7 |Q|, so that the probability distribution over following states, given that the symbol x was emitted from the distribution on states q, is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model specification 2.1 Probabilistic Finite-state Automata",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(\u2022|q, x) = q T_x.",
"eq_num": "(2)"
}
],
"section": "Model specification 2.1 Probabilistic Finite-state Automata",
"sec_num": "2"
},
{
"text": "Generation of a particular sequence x \u2208 \u03a3* works by starting in a distinguished initial state q_0, generating a symbol x, transitioning into the next state q\u2032, and so on recursively until reaching a distinguished final state q_f. Given a PFA parameterized by matrices E and T, the probability of a sequence x_{t=1}^{N}, marginalizing over all trajectories through states, can be calculated according to the Forward algorithm (Baum et al., 1970; Vidal et al., 2005a, \u00a73) as follows:",
"cite_spans": [
{
"start": 422,
"end": 441,
"text": "(Baum et al., 1970;",
"ref_id": "BIBREF1"
},
{
"start": 442,
"end": 466,
"text": "Vidal et al., 2005a, \u00a73)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model specification 2.1 Probabilistic Finite-state Automata",
"sec_num": "2"
},
{
"text": "p(x_{t=1}^{N} | E, T) = f(x_{t=1}^{N} | \u03b4_{q_0}),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model specification 2.1 Probabilistic Finite-state Automata",
"sec_num": "2"
},
{
"text": "where \u03b4_q is a one-hot coordinate vector on state q and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model specification 2.1 Probabilistic Finite-state Automata",
"sec_num": "2"
},
{
"text": "f(\u2205|q) = \u03b4_{q_f} \u2022 q, and f(x_{t=1}^{n} | q) = p(x_1|q) \u2022 f(x_{t=2}^{n} | q T_{x_1}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model specification 2.1 Probabilistic Finite-state Automata",
"sec_num": "2"
},
{
"text": "The important aspect of this formulation is that the probability of a sequence is a differentiable function of the matrices E and T that define the PFA. Because the probability function is differentiable, we can induce a PFA from a set of training sequences by using gradient descent to search for matrices that maximize the probability of the training sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model specification 2.1 Probabilistic Finite-state Automata",
"sec_num": "2"
},
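The recursion above is a product of matrix operations, so it is differentiable end to end. A minimal NumPy sketch of the forward computation (the function name and the symbol-index encoding are our own, not from the paper):

```python
import numpy as np

def forward_prob(seq, E, T, q0=0, qf=0):
    """Marginal probability of a symbol sequence under a PFA.

    seq: list of symbol indices (ending in the boundary symbol).
    E:   |Q| x |Sigma| emission matrix (rows sum to 1).
    T:   |Sigma| x |Q| x |Q| stack of transition matrices (rows sum to 1).
    Implements the standard forward recursion: the message alpha holds
    the probability of the prefix so far jointly with each current state.
    """
    alpha = np.zeros(E.shape[0])
    alpha[q0] = 1.0  # start in the distinguished initial state q_0
    for x in seq:
        # weight each state by its emission probability for x,
        # then push the mass through the transition matrix T_x
        alpha = (alpha * E[:, x]) @ T[x]
    return alpha[qf]  # probability of ending in the final state q_f

# one-state toy PFA over {a, #}: each symbol has probability 0.5
E = np.array([[0.5, 0.5]])
T = np.ones((2, 1, 1))
p = forward_prob([0, 1], E, T)  # P(a#) = 0.5 * 0.5
```

Because every operation is a matrix product, the same computation expressed in an autodiff framework yields gradients with respect to E and T for free.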
{
"text": "We describe a simple and highly general method for inducing a PFA from data by stochastic gradient descent. Although more specialized learning algorithms and heuristics exist for special cases (see for example Vidal et al., 2005b, \u00a73), ours has the advantage of generality. Our goal is to see how effective this simple approach can be in practice.",
"cite_spans": [
{
"start": 210,
"end": 234,
"text": "Vidal et al., 2005b, \u00a73)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning by gradient descent",
"sec_num": "2.2"
},
{
"text": "Given a data distribution X with support over \u03a3 * , we wish to learn a PFA by finding parameter matrices E and T to minimize an objective function of the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning by gradient descent",
"sec_num": "2.2"
},
{
"text": "J(E, T) = \u27e8\u2212log p(x|E, T)\u27e9_{x\u223cX} + C(E, T),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning by gradient descent",
"sec_num": "2.2"
},
{
"text": "(3) where \u27e8\u00b7\u27e9_{x\u223cX} indicates an average over values x drawn from the data distribution X, and \u2212log p(x|E, T) is the negative log likelihood (NLL) of a sample x under the model; the average negative log likelihood is equivalent to the cross entropy of the data distribution X and the model. By minimizing cross entropy, we maximize likelihood and thus fit to the data. The term C(E, T) represents additional complexity constraints on the E and T matrices, discussed in Section 2.4. When C is interpreted as a log prior probability on automata, minimizing Eq. 3 is equivalent to fitting the model by maximum a posteriori estimation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning by gradient descent",
"sec_num": "2.2"
},
{
"text": "Given the formulation in Eq. 3, because the objective function is differentiable, we can search for the optimal matrices E and T by performing (stochastic) descent on the gradients of the objective. That is, for a parameter matrix X, we can search for a minimum by performing updates of the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning by gradient descent",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X = X \u2212 \u03b7\u2207J(X),",
"eq_num": "(4)"
}
],
"section": "Learning by gradient descent",
"sec_num": "2.2"
},
{
"text": "where the scalar \u03b7 is the learning rate. In stochastic gradient descent, each update is performed using a random finite sample from the data distribution, called a minibatch, to approximate the average over the data distribution in Eq. 3. However, we cannot apply these updates directly to the matrices E and T, because they must be right-stochastic, meaning that the entries in each row must be positive and sum to 1. There is no guarantee that the output of Eq. 4 would satisfy these constraints. This issue was addressed by Dai (2021) by clipping the values of the matrix E to be between 0 and 1. A more general solution is, instead of doing optimization on the E and T matrices directly, to do optimization over underlying real-valued matrices \u1ebc and T\u0303 such that",
"cite_spans": [
{
"start": 526,
"end": 536,
"text": "Dai (2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning by gradient descent",
"sec_num": "2.2"
},
{
"text": "E_{ij} = exp(\u1ebc_{ij}) / \u2211_k exp(\u1ebc_{ik}), T_{ij} = exp(T\u0303_{ij}) / \u2211_k exp(T\u0303_{ik}),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning by gradient descent",
"sec_num": "2.2"
},
{
"text": "in other words, we derive the matrices E and T by applying the softmax function to underlying matrices \u1ebc and T\u0303, whose entries are called logits. Gradient descent is then done on the objective as a function of the logit matrices \u1ebc and T\u0303. This approach to parameterizing probability distributions is standard in machine learning. Applied to induce a PFA with states Q and symbol inventory \u03a3, our formulation yields a total of |Q| \u00d7 (|Q| \u00d7 |\u03a3| \u2212 1) meaningful trainable parameters. We note that this procedure is not guaranteed to find an automaton that globally minimizes the objective when optimizing T (see Vidal et al., 2005b, \u00a73). But in practice, stochastic gradient descent in high-dimensional spaces can avoid local minima, functioning as a kind of annealing (Bottou, 1991, \u00a74); using these simple optimization techniques on non-convex objectives is now standard practice in machine learning.",
"cite_spans": [
{
"start": 602,
"end": 626,
"text": "Vidal et al., 2005b, \u00a73)",
"ref_id": null
},
{
"start": 760,
"end": 778,
"text": "(Bottou, 1991, \u00a74)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning by gradient descent",
"sec_num": "2.2"
},
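The reparameterization can be sketched in a few lines of NumPy (variable names are ours; in practice the softmax would live inside an autodiff framework so that gradient updates flow to the logits):

```python
import numpy as np

def softmax_rows(logits):
    """Row-wise softmax: maps arbitrary real matrices to right-stochastic
    ones, so updates on the logits can never leave the simplex."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    ez = np.exp(z)
    return ez / ez.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n_states, n_symbols = 3, 4
E_logits = rng.standard_normal((n_states, n_symbols))            # logits for E
T_logits = rng.standard_normal((n_symbols, n_states, n_states))  # one |Q| x |Q| slab per symbol
E = softmax_rows(E_logits)  # valid emission matrix
T = softmax_rows(T_logits)  # valid transition matrices
```

Every row of E and of each T slab is positive and sums to 1 by construction, which is exactly the constraint that raw gradient updates on E and T would violate.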
{
"text": "In order to model phonotactics, a PFA must be sensitive to the boundaries of words, because there are often constraints that apply only at word beginnings or endings (Hayes and Wilson, 2008; Chomsky and Halle, 1968). To account for this, we include in the symbol inventory \u03a3 a special word boundary delimiter #, which occurs as the final symbol of each word, and which occurs only in that position. Furthermore, we constrain all matrices T to transition deterministically back into the initial state following the symbol #, effectively reusing the initial state q_0 as the final state q_f. By constructing the automata in this way, we ensure that their long-run behavior is well-behaved. If an automaton of this form is allowed to keep generating past the symbol #, it will generate successive concatenated independent and identically distributed samples from its distribution over words, with boundary symbols # delineating them. This construction makes it possible to calculate stationary distributions over states and complexity measures related to them.",
"cite_spans": [
{
"start": 166,
"end": 190,
"text": "(Hayes and Wilson, 2008;",
"ref_id": "BIBREF10"
},
{
"start": 191,
"end": 215,
"text": "Chomsky and Halle, 1968)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence representation and word boundaries",
"sec_num": "2.3"
},
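A small sketch of this constraint (the helper name is ours): after the boundary symbol #, every row of the corresponding transition matrix is forced to the one-hot vector on the initial state.

```python
import numpy as np

def constrain_boundary(T, boundary_idx, q0=0):
    """Return a copy of the transition stack T (|Sigma| x |Q| x |Q|) in
    which the matrix for the boundary symbol # transitions
    deterministically back to the initial state q0, reusing q0 as the
    final state as described above."""
    T = T.copy()
    T[boundary_idx] = 0.0
    T[boundary_idx, :, q0] = 1.0  # from any state, # leads to q0
    return T

rng = np.random.default_rng(1)
raw = rng.random((3, 2, 2))
raw /= raw.sum(axis=2, keepdims=True)  # make each row stochastic
T = constrain_boundary(raw, boundary_idx=2)
```

With this constraint in place, running the automaton past # simply restarts the word-generation process, which is what makes the stationary distribution over states well-defined.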
{
"text": "The objective in Eq. 3 includes a regularization term C representing complexity constraints. Any differentiable complexity measure could be used here. This regularization term can be viewed from a Bayesian perspective as defining a prior over automata, and providing an inductive bias. We propose to use this term to constrain the PFA induction process to produce deterministic automata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.4"
},
{
"text": "Most formal work on probabilistic finite-state automata for phonology has focused on deterministic PFAs because of their nice theoretical properties (Heinz, 2010). A deterministic PFA is distinguished by having fully deterministic transition matrices T. This condition can be expressed information-theoretically. Assuming 0 log 0 = 0, and letting the entropy of a stochastic vector p be:",
"cite_spans": [
{
"start": 150,
"end": 163,
"text": "(Heinz, 2010)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.4"
},
{
"text": "H[p] = \u2212\u2211_i p_i log p_i,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.4"
},
{
"text": "a PFA is deterministic when it satisfies the condition H[q T_x] = 0 for all symbols x and state distributions q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.4"
},
{
"text": "We can use this expression to monitor the degree of nondeterminism of a PFA during optimization, or to add a determinism constraint to the objective in Section 2.2. The average nondeterminism N of a PFA is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.4"
},
{
"text": "N(E, T) = \u2211_{ij} q\u0304_i E_{ij} H[\u03b4_{q_i} T_{x_j}],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.4"
},
{
"text": "where q\u0304 is the stationary distribution over states, representing the long-run average occupancy distribution over states. The stationary distribution q\u0304 is calculated by finding the left eigenvector of the matrix S satisfying",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.4"
},
{
"text": "q\u0304 S = q\u0304,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.4"
},
{
"text": "where S is a right-stochastic matrix giving the probability that the PFA transitions from state i to state j, marginalizing over the symbols emitted:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.4"
},
{
"text": "S_{ij} = \u2211_{x\u2208\u03a3} p(x|q_i) p(q_j|q_i, x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.4"
},
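Putting these pieces together, a NumPy sketch of the nondeterminism measure (function names are ours; the stationary distribution is found here by power iteration rather than an explicit eigendecomposition):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]  # convention: 0 log 0 = 0
    return float(-(p * np.log(p)).sum())

def stationary(S, iters=10_000):
    """Left eigenvector of S with eigenvalue 1, via power iteration."""
    q = np.full(S.shape[0], 1.0 / S.shape[0])
    for _ in range(iters):
        q = q @ S
    return q

def nondeterminism(E, T):
    """Average entropy of the next-state distribution, weighted by the
    stationary state occupancy and the emission probabilities."""
    # S_ij = sum_x p(x|q_i) p(q_j|q_i, x), marginalizing over symbols
    S = np.einsum('ix,xij->ij', E, T)
    qbar = stationary(S)
    return sum(qbar[i] * E[i, x] * entropy(T[x, i])
               for i in range(E.shape[0]) for x in range(E.shape[1]))

# a deterministic toy automaton: every row of every T_x is one-hot, so N = 0
E = np.array([[0.5, 0.5], [0.5, 0.5]])
T = np.array([[[0., 1.], [0., 1.]],   # after symbol 0 -> state 1
              [[1., 0.], [1., 0.]]])  # after symbol 1 -> state 0
```

For fully deterministic transitions the measure is exactly zero, while any row that spreads probability over several next states contributes positive entropy.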
{
"text": "For the Strictly Local and Strictly Piecewise automata, N = 0 by construction. For an automaton parameterized by T = softmax(T\u0303), it is not possible to attain N = 0 exactly, but N can nonetheless be made arbitrarily small. There are alternative parameterizations where N = 0 is achievable, for example using the sparsemax function instead of softmax (Martins and Astudillo, 2016; Peters et al., 2019).",
"cite_spans": [
{
"start": 341,
"end": 370,
"text": "(Martins and Astudillo, 2016;",
"ref_id": "BIBREF20"
},
{
"start": 371,
"end": 391,
"text": "Peters et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.4"
},
{
"text": "In order to constrain automata to be deterministic, we set the regularization term in Eq. 3 to be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.4"
},
{
"text": "C = \u03b1N,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.4"
},
{
"text": "where \u03b1 is a non-negative scalar determining the strength of the trade-off of cross entropy and nondeterminism in the optimization. With \u03b1 = 0 there is no constraint on the nondeterminism of the automaton, and minimizing the objective in Eq. 3 reduces to maximum likelihood estimation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.4"
},
{
"text": "We define Strictly Local and Strictly Piecewise automata as automata that generate the respective languages. We implement Strictly Local and Strictly Piecewise automata by hard-coding the transition matrices T. For these automata, we only do optimization over the emission matrices E.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementing restricted automata",
"sec_num": "2.5"
},
{
"text": "Strictly Local In a Strictly k-Local (k-SL) language, each symbol is conditioned only on the immediately preceding k \u2212 1 symbol(s) (Heinz, 2018; Rogers and Pullum, 2011). We implement a 2-SL automaton by associating each state q \u2208 Q with a unique element x of the symbol inventory \u03a3. Upon emitting symbol x, the automaton deterministically transitions into the corresponding state, denoted q_x. Thus the transition matrices have the form",
"cite_spans": [
{
"start": 127,
"end": 140,
"text": "(Heinz, 2018;",
"ref_id": "BIBREF13"
},
{
"start": 141,
"end": 165,
"text": "Rogers and Pullum, 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementing restricted automata",
"sec_num": "2.5"
},
{
"text": "T_x = the matrix in which every row is the one-hot vector \u03b4_{q_x}: each row has a 1 in the column for state q_x and a 0 in the column for every state q_{y \u2260 x}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementing restricted automata",
"sec_num": "2.5"
},
{
"text": "This construction can be straightforwardly extended to k-SL, yielding |\u03a3| k\u22121 \u00d7 (|\u03a3| \u2212 1) trainable parameters for a k-SL automaton.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementing restricted automata",
"sec_num": "2.5"
},
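The hard-coded 2-SL transition structure amounts to one line per symbol; a sketch (helper name ours), with states indexed so that state x means "the last symbol was x":

```python
import numpy as np

def sl2_transitions(n_symbols):
    """Transition stack for a 2-SL automaton: |Q| = |Sigma|, and after
    emitting symbol x the automaton moves to state q_x from any state."""
    T = np.zeros((n_symbols, n_symbols, n_symbols))
    for x in range(n_symbols):
        T[x, :, x] = 1.0  # every row of T_x is one-hot on column q_x
    return T

T = sl2_transitions(4)
# with T frozen like this, only the emission matrix E would be trained
```

Because the transitions are fixed and deterministic, gradient descent only ever touches the emission rows, one per "last symbol seen" state.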
{
"text": "Strictly Piecewise In a Strictly k-Piecewise (k-SP) language, each symbol depends on the presence of any k \u2212 1 preceding symbols at arbitrary distance (Heinz, 2007, 2018; Shibata and Heinz, 2019). For example, in a 2-SP language, in a string abc, c is conditioned on the presence of a and the presence of b, without regard to their distance or relative order. The implementation of an SP automaton is slightly more complex than that of the SL automaton, as the number of states required in a na\u00efve implementation is exponential in the symbol inventory size, resulting in intractably large matrices. We circumvent this complexity by parameterizing a 2-SP automaton as a product of simpler automata. We associate each symbol x \u2208 \u03a3 with a sub-automaton A_x which has two states q^x_0 and q^x_1, with state q^x_0 indicating that the symbol x has not been seen, and q^x_1 indicating that it has been seen. Each sub-automaton A_x has an emission matrix E^{(x)} of size 2 \u00d7 |\u03a3| corresponding to the two states q^x_0 and q^x_1; the row of E^{(x)} corresponding to q^x_0 is constrained to be the uniform distribution over symbols. The transition matrices T^{(x)} are",
"cite_spans": [
{
"start": 147,
"end": 159,
"text": "(Heinz, 2007",
"ref_id": "BIBREF11"
},
{
"start": 160,
"end": 174,
"text": "Heinz, 2018;",
"ref_id": "BIBREF13"
},
{
"start": 175,
"end": 199,
"text": "Shibata and Heinz, 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementing restricted automata",
"sec_num": "2.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T^{(x)}_x = [[0, 1], [0, 1]],",
"eq_num": ""
}
],
"section": "Implementing restricted automata",
"sec_num": "2.5"
},
{
"text": "T^{(x)}_{y \u2260 x} = [[1, 0], [0, 1]].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementing restricted automata",
"sec_num": "2.5"
},
{
"text": "Then the probability of the t'th symbol in a sequence x t given a context of previous symbols x t\u22121 i=1 is the geometric mixture of the probability of x t under each sub-automaton, also called the co-emission probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementing restricted automata",
"sec_num": "2.5"
},
{
"text": "p(x_t | x_{i=1}^{t\u22121}) \u221d \u220f_{y=1}^{|\u03a3|} p_{A_y}(x_t | x_{i=1}^{t\u22121}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementing restricted automata",
"sec_num": "2.5"
},
{
"text": "Because each sub-automaton A_y is deterministic, its state after seeing the context x_{i=1}^{t\u22121} is known, and the conditional probability p_{A_y}(x_t | x_{i=1}^{t\u22121}) can be computed using Eq. 1. For calculating the probability of a sequence, we assume an initial state of having seen the boundary symbol #; that is, the sub-automaton A_# starts in state q^{#}_1. Using this parameterization, we can do optimization over the collection of emission matrices {E^{(x)}}_{x\u2208\u03a3}. This construction yields |\u03a3| \u00d7 (|\u03a3| \u2212 1) trainable parameters for the 2-SP automaton, the same number of parameters as the 2-SL automaton.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementing restricted automata",
"sec_num": "2.5"
},
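A sketch of the product construction and the co-emission probability (names are ours; the "unseen" emission rows are fixed to uniform as described above):

```python
import numpy as np

def sp2_logprob(seq, Emats, boundary=0):
    """Log-probability of seq under a product of two-state sub-automata.

    Emats[y] is the 2 x |Sigma| emission matrix of sub-automaton A_y:
    row 0 is the state "y not yet seen" (held uniform), row 1 "y seen".
    The next-symbol distribution is the normalized product (geometric
    mixture / co-emission) of the sub-automata's emission rows.
    """
    seen = np.zeros(len(Emats), dtype=int)
    seen[boundary] = 1  # A_# starts in its "seen" state
    logp = 0.0
    for x in seq:
        rows = np.array([Emats[y][seen[y]] for y in range(len(Emats))])
        probs = rows.prod(axis=0)
        probs /= probs.sum()
        logp += np.log(probs[x])
        seen[x] = 1  # sub-automaton A_x flips to its "seen" state
    return logp

# with every row uniform, the model reduces to a uniform distribution
uniform = [np.full((2, 3), 1.0 / 3) for _ in range(3)]
lp = sp2_logprob([1, 2], uniform)
```

Tracking one bit per symbol keeps the state space at 2|Σ| numbers instead of the 2^|Σ| states a flat product automaton would need.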
{
"text": "SP + SL It is also possible to create and train an automaton with the ability to condition on both 2-SL and 2-SP factors by taking the product of 2-SL and 2-SP automata, as proposed by . We refer to the language generated by such an automaton as 2-SL + 2-SP. We experiment with such product machines below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementing restricted automata",
"sec_num": "2.5"
},
{
"text": "PFA induction from data is a well-studied task which has been the subject of multiple competitions over the years (see Verwer et al., 2012, for a review). The most common approaches are variants of Baum-Welch and heuristic state-merging algorithms (see for example de la Higuera, 2010). Gibbs samplers and spectral methods have also been proposed (Gao and Johnson, 2008; Bailly, 2011; Shibata and Yoshinaka, 2012). Induction of restricted PDFAs, especially for SL and SP languages, is explored in Rogers (2013, 2010). Our work differs from previous approaches in its simplicity. Inspired by Shibata and Heinz (2019), we optimize the training objective directly via gradient descent, without approximations or heuristics other than the use of minibatches. The same algorithm is applied to learn both transition and emission structure, for learning of both general PFAs and restricted PDFAs. One of our contributions is to show that this very simple approach gives reasonable results for learning phonotactics.",
"cite_spans": [
{
"start": 119,
"end": 138,
"text": "Verwer et al., 2012",
"ref_id": "BIBREF27"
},
{
"start": 348,
"end": 371,
"text": "(Gao and Johnson, 2008;",
"ref_id": "BIBREF8"
},
{
"start": 372,
"end": 385,
"text": "Bailly, 2011;",
"ref_id": "BIBREF0"
},
{
"start": 386,
"end": 414,
"text": "Shibata and Yoshinaka, 2012)",
"ref_id": "BIBREF26"
},
{
"start": 499,
"end": 518,
"text": "Rogers (2013, 2010)",
"ref_id": null
},
{
"start": 592,
"end": 616,
"text": "Shibata and Heinz (2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2.6"
},
{
"text": "First, we test the ability of the model to recover automata for simple examples of subregular languages. We do so for the two subregular classes 2-SL and 2-SP described in Section 2.5. For each of these language classes, we implement a reference PFA which generates strings from a simple example language in that class, and then generate 10,000 sample sequences from the reference PFA. We then use these samples as training data, and study whether our learners can recover the relevant constraints from the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing toy languages",
"sec_num": "3"
},
{
"text": "We evaluate the ability to induce appropriate automata in two ways. First, since we are studying very simple languages and automata, it is possible to directly inspect the E and T matrices and check that they implement the correct automaton by observing the transition and emission probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.1"
},
{
"text": "Second, we study the probabilities assigned to carefully selected strings which exemplify the constraints that define the languages. For each language, we define an illegal test string which violates the constraints of the language, and a minimally-different legal test string. Given an automaton, we can measure the legal-illegal difference: the log probability of the legal test string minus the log probability of the illegal test string. A larger legal-illegal difference indicates that the model is assigning a higher probability to the legal form compared to the illegal one and therefore is successfully learning the constraints represented by the testing data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.1"
},
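As an illustration of the metric, a sketch using a hand-built reference automaton for the *ab language of Section 3.2 (the uniform emission probabilities are our assumption, not the paper's exact reference machine):

```python
import numpy as np

# symbols: a=0, b=1, c=2, #=3; states: 0 = "last symbol was not a",
# 1 = "last symbol was a" (where b is forbidden)
E = np.array([[0.25, 0.25, 0.25, 0.25],
              [1/3., 0.00, 1/3., 1/3.]])
T = np.zeros((4, 2, 2))
T[0, :, 1] = 1.0          # after a -> state 1
for x in (1, 2, 3):
    T[x, :, 0] = 1.0      # after b, c, or # -> state 0

def log_prob(seq):
    alpha = np.array([1.0, 0.0])          # start (and end) in state 0
    for x in seq:
        alpha = (alpha * E[:, x]) @ T[x]  # forward update
    return np.log(alpha[0]) if alpha[0] > 0 else -np.inf

legal   = [1, 0, 2, 2, 2, 1, 3]  # b a c c c b #
illegal = [1, 0, 1, 2, 2, 2, 3]  # b a b c c c #
difference = log_prob(legal) - log_prob(illegal)
```

Under this reference automaton the legal-illegal difference is infinite, since the illegal string has probability zero; an induced model is doing well when the difference grows large.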
{
"text": "All languages are defined over the symbol inventory {a, b, c} plus the boundary symbol #.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Languages",
"sec_num": "3.2"
},
{
"text": "As an exemplar of 2-SL languages, we use the language characterized by the forbidden factor *ab. A deterministic PFA for the language is given in Figure 1 (top). The language contains all strings that do not have an a followed immediately by a b. Our legal test string for this language is bacccb# and the illegal test string is babccc#.",
"cite_spans": [],
"ref_spans": [
{
"start": 146,
"end": 160,
"text": "Figure 1 (top)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Languages",
"sec_num": "3.2"
},
{
"text": "As an exemplar of 2-SP languages, we use the language characterized by the forbidden factor *a\u2026b. This language contains all strings that do not have an a followed by a b at any distance. The reference automaton is given in Figure 1 (bottom). The legal test string is baccca# and the illegal test string is bacccb#.",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 244,
"text": "Figure 1 (bottom)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Languages",
"sec_num": "3.2"
},
{
"text": "The logit matrices \u1ebc and T\u0303 are initialized with random draws from a standard Normal distribution (Derrida, 1981). We perform stochastic gradient descent using the Adam algorithm, which adaptively sets the learning rate (Kingma and Ba, 2015). We perform 10,000 update steps with a starting learning rate of \u03b7 = 0.001 and a minibatch size of 5.",
"cite_spans": [
{
"start": 95,
"end": 110,
"text": "(Derrida, 1981)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training parameters",
"sec_num": "3.3"
},
{
"text": "Unrestricted PFA induction succeeds in recovering the reference automata for both toy languages. Learners restricted to the appropriate classes, as well as the automaton combining SL and SP factors, also succeed in inducing the appropriate automata, while learners restricted to the 'wrong' class fail. Figure 1 shows the legal-illegal differences for test strings over the course of training. We can see that, when the learner is unrestricted or when the learner is in the appropriate class, it eventually picks up on the relevant constraint, with the legal-illegal difference increasing apparently without bound over training. Unrestricted learners take longer to reach this point, but they reach it reliably. On the other hand, looking at the legal-illegal differences for learners in the wrong class, we see that they asymptote to a small number and stop improving.",
"cite_spans": [],
"ref_spans": [
{
"start": 303,
"end": 311,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4"
},
{
"text": "These results demonstrate that our simple method for PFA induction does succeed in inducing certain simple structures relevant for modeling phonotactics in a small, controlled setting. Next, we turn to induction of phonotactics from corpus data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4"
},
{
"text": "We evaluate our learner by training it on dictionary forms from Quechua and Navajo and then studying its ability to predict attested forms that were held out in training in addition to artificially constructed nonce forms which probe the ability of the model to represent nonlocal constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus experiments",
"sec_num": "4"
},
{
"text": "All training parameters are as in Section 3.3, except that we train for 100, 000 steps, and control the succession of minibatches to be the same across models within the same language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training parameters",
"sec_num": "4.1"
},
{
"text": "The proposed learner is applied to the datasets of Navajo and Quechua (Gouskova and Gallagher, 2020) , in which nonlocal phonotactics are attested.",
"cite_spans": [
{
"start": 70,
"end": 100,
"text": "(Gouskova and Gallagher, 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.2"
},
{
"text": "In Navajo, the co-occurrence of alveolar and palatal strident is illegal. The learning data of Navajo includes 6, 279 Navajo phonological words; we divide this data into a training set of 5, 023 forms and a held-out set of 1, 256 forms. The nonce testing data of Navajo consists of 5, 000 generated nonce words, which were labelled as illegal (N = 3, 271) and legal (N = 1, 729) based on whether the nonlocal phonotactics are satisfied. In Quechua, any stop cannot be followed by an ejective or aspirated stop at any distance. The learning data of Quechua includes 10, 804 phonological words, which we separate into 8, 643 training forms and 2, 160 held-out forms. The testing data of Quechua (Gouskova and Gallagher, 2020) consists of 24, 352 nonce forms which were manually classified as legal (N = 18, 502) and illegal (N = 5, 810, including stop-aspirate and stopejective pairs).",
"cite_spans": [
{
"start": 693,
"end": 723,
"text": "(Gouskova and Gallagher, 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.2"
},
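The labelling of nonce forms can be reproduced with a single left-to-right scan. The sketch below is our own illustration, with a hypothetical, abbreviated segment inventory; it checks the Quechua constraint that a plain stop may not be followed, at any distance, by an ejective or aspirated stop:

```python
# Hypothetical, abbreviated segment classes, for illustration only.
PLAIN_STOPS = {"p", "t", "k", "q"}
EJECTIVES_ASPIRATES = {"p'", "t'", "k'", "q'", "ph", "th", "kh", "qh"}

def is_legal_quechua(segments):
    """Return True iff no plain stop precedes an ejective/aspirate
    anywhere later in the form (a Strictly 2-Piecewise constraint)."""
    seen_plain_stop = False
    for seg in segments:
        if seen_plain_stop and seg in EJECTIVES_ASPIRATES:
            return False
        if seg in PLAIN_STOPS:
            seen_plain_stop = True
    return True

assert not is_legal_quechua(["k", "a", "p'"])  # stop ... ejective: illegal
assert is_legal_quechua(["p'", "a", "k"])      # ejective before stop: legal
assert is_legal_quechua(["a", "m", "a"])       # no stops at all: legal
```

The Navajo constraint is symmetric (alveolar and palatal stridents may not co-occur in either order), so the analogous checker would track both classes rather than only the leftward trigger.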
{
"text": "For the linguistic performance of the classifier, we study two main dependent variables. First, the average held-out negative log likelihood (NLL) indicates the ability of the model to assign high probabilities to unseen but attested forms-low NLL indicates higher probabilities. Second, using our nonce forms dataset, we measure the extent to which the model can differentiate the legal forms from the illegal forms using the difference in log likelihood for the legal forms minus the illegal forms. This is the same as the legal-illegal Figure 4: Performance of a 2-SP automaton, a 2-SL automaton, a 2-SP + 2-SL product automaton, and an unrestricted PFA with 1, 024 states and \u03b1 = 0. 'Heldout NLL' is the average NLL of a form in the set of attested forms never seen during training. 'Legal-illegal difference' is the difference in log likelihood between 'legal' and 'illegal' forms in the nonce test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependent Variables",
"sec_num": "4.3"
},
{
"text": "difference described in Section 3.1, but now as an average over many legal-illegal nonce pairs instead of a difference for one pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependent Variables",
"sec_num": "4.3"
},
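As a concrete illustration of the metric (ours, under a deliberately trivial stand-in scoring function), the legal-illegal difference averages log likelihood over each set and takes the difference:

```python
import math

def legal_illegal_difference(logprob, legal_forms, illegal_forms):
    """Mean log likelihood of legal forms minus that of illegal forms.
    A large positive value means the model penalizes illegal forms."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean([logprob(f) for f in legal_forms])
            - mean([logprob(f) for f in illegal_forms]))

# Stand-in scorer: a 'model' that assigns probability 0.5 to the legal
# form and 0.05 to the illegal form (numbers purely for illustration).
scores = {"ta": math.log(0.5), "tka": math.log(0.05)}
diff = legal_illegal_difference(scores.get, ["ta"], ["tka"])
assert diff > 0  # the 'model' prefers the legal form
```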
{
"text": "Unrestricted PFA induction Figure 3 shows results from induction of unrestricted PFAs with various numbers of states. We find that show the average NLL of forms in the heldout data, as well as 'overfitting', defined as the average held-out NLL minus the average training set NLL. This number shows the extent to which the model assigns higher probabilities to forms in the training set as opposed to the held-out set, an index of overfitting. We find that automata with more states fit the data better, but are also more prone to overfitting to the training set.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 35,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "In Figure 3 (bottom two rows) we also show the measured nondeterminism N of the induced automata throughout training, for different values of the regularization parameter \u03b1 (see Section 2.4). We find that, even without an explicit constraint for determinism, the induced PFAs tend towards determinism over time, with N reaching around 1.5 bits by the final training step. Explicit regularization (with \u03b1 = 1) makes this process faster, with N reaching around 0.5 bits. Regularization for determinism has only a minimal effect on the NLL values. Figure 4 shows held-out NLL and the legal-illegal difference for both languages, comparing the SL automaton, the SP automaton, the product SP + SL automaton, and a PFA with 1, 024 states and \u03b1 = 0.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 545,
"end": 553,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
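One natural way to quantify nondeterminism in bits, which we take N to be for this illustration (the paper's exact definition is given in Section 2.4, not reproduced in this excerpt), is the entropy of the next-state distribution averaged over all (state, symbol) pairs; a fully deterministic automaton scores 0 bits:

```python
import math

def entropy_bits(dist):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in dist if p > 0.0)

def nondeterminism(transition):
    """Average next-state entropy over all (state, symbol) pairs.
    `transition[q][a]` is a distribution over next states."""
    rows = [dist for per_state in transition for dist in per_state]
    return sum(entropy_bits(d) for d in rows) / len(rows)

# Two states, two symbols: one deterministic automaton, one maximally
# nondeterministic one (toy numbers, for illustration only).
deterministic = [[[1.0, 0.0], [0.0, 1.0]],
                 [[1.0, 0.0], [1.0, 0.0]]]
uniform = [[[0.5, 0.5], [0.5, 0.5]],
           [[0.5, 0.5], [0.5, 0.5]]]

assert nondeterminism(deterministic) == 0.0       # 0 bits: deterministic
assert abs(nondeterminism(uniform) - 1.0) < 1e-12  # 1 bit: coin-flip states
```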
{
"text": "In terms of the ability to predict attested heldout forms, the best model is consistently the unrestricted PFA, with the SP automaton performing the worst. However, in terms of predicting the illformedness of artificial forms violating nonlocal phonotactic constraints, the best model is either the SP automaton or the SP + SL product automaton. Both of these automata successfully induce the nonlocal constraint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic performance and restricted models",
"sec_num": null
},
{
"text": "On the other hand, the unrestricted PFA learner shows no evidence at all of having learned the difference between legal and illegal forms in the artificial data, despite having the capacity to do so in theory, and despite succeeding in inducing a 2-SP language in Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic performance and restricted models",
"sec_num": null
},
{
"text": "We find that an unrestricted PFA learner performs most accurately when predicting real held-out forms, while an SP learner is most effective in learning certain nonlocal constraints. In fact, in terms of its ability to model the nonlocal constraints, the PFA learner ends up comparable to an SL learner, which cannot learn the constraints at all. Meanwhile, the SP learner, which is unable to model local constraints, fares much worse than even the SL learner on predicting held-out forms. The product SP + SL learner combines the strengths of both restricted learners, but still does not assign as high probability to the real held-out forms as the unrestricted PFA learner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
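How an SP + SL product combines local and nonlocal factors can be illustrated as follows (our sketch under simplifying assumptions: two small deterministic PFAs over the same two-symbol alphabet, hypothetical probability values, and scores combined by summing per-string log probabilities rather than by building and renormalizing the full product automaton described in the paper):

```python
import math

def pfa_logprob(emit, trans, start, form):
    """Log probability of `form` under a deterministic PFA.
    emit[q][a]: probability of emitting symbol a in state q.
    trans[q][a]: unique next state after emitting a in state q."""
    q, logp = start, 0.0
    for a in form:
        logp += math.log(emit[q][a])
        q = trans[q][a]
    return logp

# SL-style factor: state tracks the previous symbol; penalizes b right
# after a. SP-style factor: state records whether a has ever occurred;
# penalizes b anywhere after a. (Hypothetical numbers throughout.)
emit_sl = {0: {"a": 0.5, "b": 0.5}, 1: {"a": 0.9, "b": 0.1}}
trans_sl = {0: {"a": 1, "b": 0}, 1: {"a": 1, "b": 0}}
emit_sp = {0: {"a": 0.5, "b": 0.5}, 1: {"a": 0.7, "b": 0.3}}
trans_sp = {0: {"a": 1, "b": 1}, 1: {"a": 1, "b": 1}}

def product_logscore(form):
    """Unnormalized product-model score: sum of factor log probs."""
    return (pfa_logprob(emit_sl, trans_sl, 0, form)
            + pfa_logprob(emit_sp, trans_sp, 0, form))

# The product penalizes a form violating either factor's constraint.
assert product_logscore(["a", "a"]) > product_logscore(["a", "b"])
```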
{
"text": "This pattern of performance suggests that the PFA learner is using most of its states to model local constraints beyond those captured in a 2-SL language. These constraints are important for predicting real held-out forms. The SP automaton is unable to achieve strong performance on heldout forms without the ability to model these local constraints. On the other hand, the unrestricted PFA tends to overfit to its training data, perhaps explaining its failure to detect nonlocal constraints which are picked up by the appropriate restricted automata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "We introduced a framework for phonotactic learning based on simple induction of probabilistic finitestate automata by stochastic gradient descent. We showed how this framework can be used to learn unrestricted PFAs, in addition to PFAs restricted to certain formal language classes such as Strictly Local and Strictly Piecewise, via constraints on the transition matrices that define the automata. Furthermore, we showed that the framework is successful in learning some phonotactic phenomena, with unrestricted automata performing best in a wide-coverage evaluation on attested but held-out forms, and Strictly Piecewise automata performing best in a targeted evaluation using nonce forms focusing on nonlocal constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our results leave open the question of whether the unrestricted learner or one of the restricted learners is 'best' for learning phonotactics, since they perform differently on different metrics. A key question for future work is whether there might be some model that could do well in inducing both local and nonlocal constraints simultaneously, and performing well on both the held-out evaluation and the nonce form evaluation. Such a model could come in the form of another restricted language class such as Tier-Based Strictly Local languages (Heinz et al., 2011; Jardine and Heinz, 2016; Mc-Mullin, 2016; Jardine and McMullin, 2017) , or perhaps in the form of a regularization term in the training objective which enforces an inductive bias that favors certain nonlocal interactions.",
"cite_spans": [
{
"start": 547,
"end": 567,
"text": "(Heinz et al., 2011;",
"ref_id": "BIBREF14"
},
{
"start": 568,
"end": 592,
"text": "Jardine and Heinz, 2016;",
"ref_id": "BIBREF17"
},
{
"start": 593,
"end": 609,
"text": "Mc-Mullin, 2016;",
"ref_id": null
},
{
"start": 610,
"end": 637,
"text": "Jardine and McMullin, 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The code for this project is available at http://github.com/hutengdai/ PFA-learner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "This work was supported by a GPU Grant from the NVIDIA corporation. We thank the three anonymous reviewers and Adam Jardine, Jeff Heinz, and Dakotah Lambert for their comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Quadratic weighted automata: Spectral algorithm and likelihood maximization",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Bailly",
"suffix": ""
}
],
"year": 2011,
"venue": "Asian Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "147--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Bailly. 2011. Quadratic weighted automata: Spectral algorithm and likelihood maximization. In Asian Conference on Machine Learning, pages 147- 163. PMLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains",
"authors": [
{
"first": "Leonard",
"middle": [
"E"
],
"last": "Baum",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Petrie",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Soules",
"suffix": ""
},
{
"first": "Normal",
"middle": [],
"last": "Weiss",
"suffix": ""
}
],
"year": 1970,
"venue": "Annals of Mathematical Statistics",
"volume": "41",
"issue": "",
"pages": "164--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leonard E. Baum, Ted Petrie, George Soules, and Nor- mal Weiss. 1970. A maximization technique occur- ring in the statistical analysis of probabilistic func- tions of Markov chains. Annals of Mathematical Statistics, 41:164-171.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Stochastic gradient learning in neural networks",
"authors": [
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of Neuro-N\u0131mes",
"volume": "91",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L\u00e9on Bottou. 1991. Stochastic gradient learning in neural networks. Proceedings of Neuro-N\u0131mes, 91(8):12.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Sound Pattern of English",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
},
{
"first": "Morris",
"middle": [],
"last": "Halle",
"suffix": ""
}
],
"year": 1968,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky and Morris Halle. 1968. The Sound Pattern of English. Harper & Row.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning nonlocal phonotactics in Strictly Piecewise phonotactic model",
"authors": [
{
"first": "Huteng",
"middle": [],
"last": "Dai",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2020 Annual Meeting on Phonology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huteng Dai. 2021. Learning nonlocal phonotactics in Strictly Piecewise phonotactic model. In Proceed- ings of the 2020 Annual Meeting on Phonology.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Grammatical Inference: Learning Automata and Grammars",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "de la Higuera",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin de la Higuera. 2010. Grammatical Inference: Learning Automata and Grammars. Cambridge Uni- versity Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Random-energy model: An exactly solvable model of disordered systems",
"authors": [
{
"first": "Bernard",
"middle": [],
"last": "Derrida",
"suffix": ""
}
],
"year": 1981,
"venue": "Physical Review B",
"volume": "24",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernard Derrida. 1981. Random-energy model: An ex- actly solvable model of disordered systems. Physi- cal Review B, 24(5):2613.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A generative model of phonotactics",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Albright",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"J"
],
"last": "O'donnell",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "73--86",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00047"
]
},
"num": null,
"urls": [],
"raw_text": "Richard Futrell, Adam Albright, Peter Graff, and Tim- othy J. O'Donnell. 2017. A generative model of phonotactics. Transactions of the Association for Computational Linguistics, 5:73-86.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A comparison of Bayesian estimators for unsupervised Hidden Markov Model POS taggers",
"authors": [
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "344--352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianfeng Gao and Mark Johnson. 2008. A compar- ison of Bayesian estimators for unsupervised Hid- den Markov Model POS taggers. In Proceedings of the 2008 Conference on Empirical Methods in Natu- ral Language Processing, pages 344-352, Honolulu, Hawaii. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Inducing nonlocal constraints from baseline phonotactics. Natural Language & Linguistic Theory",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Gouskova",
"suffix": ""
},
{
"first": "Gillian",
"middle": [],
"last": "Gallagher",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "38",
"issue": "",
"pages": "77--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Gouskova and Gillian Gallagher. 2020. Induc- ing nonlocal constraints from baseline phonotactics. Natural Language & Linguistic Theory, 38(1):77- 116.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A maximum entropy model of phonotactics and phonotactic learning",
"authors": [
{
"first": "Bruce",
"middle": [],
"last": "Hayes",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2008,
"venue": "Linguistic Inquiry",
"volume": "39",
"issue": "3",
"pages": "379--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruce Hayes and Colin Wilson. 2008. A maximum en- tropy model of phonotactics and phonotactic learn- ing. Linguistic Inquiry, 39(3):379-440.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The inductive learning of phonotactic patterns",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Heinz. 2007. The inductive learning of phono- tactic patterns. Ph.D. thesis, PhD dissertation, Uni- versity of California, Los Angeles.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning long-distance phonotactics",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
}
],
"year": 2010,
"venue": "Linguistic Inquiry",
"volume": "41",
"issue": "4",
"pages": "623--661",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Heinz. 2010. Learning long-distance phonotac- tics. Linguistic Inquiry, 41(4):623-661.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The computational nature of phonological generalizations. Phonological Typology, Phonetics and Phonology",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "126--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Heinz. 2018. The computational nature of phonological generalizations. Phonological Typol- ogy, Phonetics and Phonology, pages 126-195.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Tier-based Strictly Local constraints for phonology",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
},
{
"first": "Chetan",
"middle": [],
"last": "Rawal",
"suffix": ""
},
{
"first": "Herbert",
"middle": [
"G"
],
"last": "Tanner",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "58--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Heinz, Chetan Rawal, and Herbert G. Tan- ner. 2011. Tier-based Strictly Local constraints for phonology. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguistics: Human Language Technologies, pages 58-64, Port- land, Oregon, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Estimating Strictly Piecewise distributions",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Rogers",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "886--896",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Heinz and James Rogers. 2010. Estimating Strictly Piecewise distributions. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 886-896, Upp- sala, Sweden. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning subregular classes of languages with factored deterministic automata",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Rogers",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 13th Meeting on the Mathematics of Language (MoL 13)",
"volume": "",
"issue": "",
"pages": "64--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Heinz and James Rogers. 2013. Learning sub- regular classes of languages with factored determin- istic automata. In Proceedings of the 13th Meeting on the Mathematics of Language (MoL 13), pages 64-71, Sofia, Bulgaria. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning Tierbased Strictly 2-Local languages",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Jardine",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "87--98",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00085"
]
},
"num": null,
"urls": [],
"raw_text": "Adam Jardine and Jeffrey Heinz. 2016. Learning Tier- based Strictly 2-Local languages. Transactions of the Association for Computational Linguistics, 4:87- 98.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Efficient learning of Tier-based Strictly k-Local languages",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Jardine",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Mcmullin",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Language and Automata Theory and Applications",
"volume": "",
"issue": "",
"pages": "64--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Jardine and Kevin McMullin. 2017. Effi- cient learning of Tier-based Strictly k-Local lan- guages. In International Conference on Language and Automata Theory and Applications, pages 64- 76. Springer.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, Conference Track Proceedings, San Diego, CA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "From softmax to sparsemax: A sparse model of attention and multi-label classification",
"authors": [
{
"first": "Andre",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Ramon",
"middle": [],
"last": "Astudillo",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1614--1623",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andre Martins and Ramon Astudillo. 2016. From softmax to sparsemax: A sparse model of atten- tion and multi-label classification. In International Conference on Machine Learning, pages 1614-1623. PMLR.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Tier-based locality in long-distance phonotactics: learnability and typology",
"authors": [
{
"first": "Kevin",
"middle": [
"James"
],
"last": "Mcmullin",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin James McMullin. 2016. Tier-based locality in long-distance phonotactics: learnability and typol- ogy. Ph.D. thesis, University of British Columbia.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Sparse sequence-to-sequence models",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Vlad",
"middle": [],
"last": "Niculae",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1504--1519",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1146"
]
},
"num": null,
"urls": [],
"raw_text": "Ben Peters, Vlad Niculae, and Andr\u00e9 F. T. Martins. 2019. Sparse sequence-to-sequence models. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1504- 1519, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Cognitive and sub-regular complexity",
"authors": [
{
"first": "James",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Fero",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Hurst",
"suffix": ""
},
{
"first": "Dakotah",
"middle": [],
"last": "Lambert",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Wibel",
"suffix": ""
}
],
"year": 2013,
"venue": "Formal Grammar",
"volume": "",
"issue": "",
"pages": "90--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Rogers, Jeffrey Heinz, Margaret Fero, Jeremy Hurst, Dakotah Lambert, and Sean Wibel. 2013. Cognitive and sub-regular complexity. In Formal Grammar, pages 90-108. Springer.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Aural pattern recognition experiments and the subregular hierarchy",
"authors": [
{
"first": "James",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"K"
],
"last": "Pullum",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Logic, Language and Information",
"volume": "20",
"issue": "3",
"pages": "329--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Rogers and Geoffrey K. Pullum. 2011. Aural pattern recognition experiments and the subregular hierarchy. Journal of Logic, Language and Informa- tion, 20(3):329-342.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Maximum likelihood estimation of factored regular deterministic stochastic languages",
"authors": [
{
"first": "Chihiro",
"middle": [],
"last": "Shibata",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 16th Meeting on the Mathematics of Language",
"volume": "",
"issue": "",
"pages": "102--113",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5709"
]
},
"num": null,
"urls": [],
"raw_text": "Chihiro Shibata and Jeffrey Heinz. 2019. Maximum likelihood estimation of factored regular determinis- tic stochastic languages. In Proceedings of the 16th Meeting on the Mathematics of Language, pages 102-113, Toronto, Canada. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Marginalizing out transition probabilities for several subclasses of PFAs",
"authors": [
{
"first": "Chihiro",
"middle": [],
"last": "Shibata",
"suffix": ""
},
{
"first": "Ryo",
"middle": [],
"last": "Yoshinaka",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Machine Learning Research -Workshops and Conference Proceedings",
"volume": "21",
"issue": "",
"pages": "259--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chihiro Shibata and Ryo Yoshinaka. 2012. Marginaliz- ing out transition probabilities for several subclasses of PFAs. Journal of Machine Learning Research -Workshops and Conference Proceedings, 21:259- 263.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "PAutomaC: A PFA/HMM learning competition",
"authors": [
{
"first": "Sicco",
"middle": [],
"last": "Verwer",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Eyraud",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "De La Higuera",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Machine Learning Research -Workshops and Conference Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sicco Verwer, R\u00e9mi Eyraud, and Colin de la Higuera. 2012. PAutomaC: A PFA/HMM learning competi- tion. Journal of Machine Learning Research -Work- shops and Conference Proceedings, 21.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Probabilistic finite-state machines -Part I",
"authors": [
{
"first": "Enrique",
"middle": [],
"last": "Vidal",
"suffix": ""
},
{
"first": "Franck",
"middle": [],
"last": "Thollard",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "de la Higuera",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
},
{
"first": "Rafael",
"middle": [
"C"
],
"last": "Carrasco",
"suffix": ""
}
],
"year": 2005,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "27",
"issue": "7",
"pages": "1013--1025",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enrique Vidal, Franck Thollard, Colin de la Higuera, Francisco Casacuberta, and Rafael C. Carrasco. 2005a. Probabilistic finite-state machines -Part I. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(7):1013-1025.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Probabilistic finite-state machines -Part II",
"authors": [
{
"first": "Enrique",
"middle": [],
"last": "Vidal",
"suffix": ""
},
{
"first": "Franck",
"middle": [],
"last": "Thollard",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "de la Higuera",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
},
{
"first": "Rafael",
"middle": [
"C"
],
"last": "Carrasco",
"suffix": ""
}
],
"year": 2005,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "27",
"issue": "7",
"pages": "1026--1039",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enrique Vidal, Franck Thollard, Colin de la Higuera, Francisco Casacuberta, and Rafael C. Carrasco. 2005b. Probabilistic finite-state machines -Part II. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(7):1026-1039.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Difference in log probabilities for legal and illegal forms over the course of PFA induction for toy languages. A large positive value indicates that the relevant constraint has been learned.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "Reference automata for the 2-SL language characterized by the constraint *ab (top) and the 2-SP language characterized by the constraint *a. . . b (bottom). Arcs are annotated with symbols emitted and their corresponding emission probabilities.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Accuracy and complexity metrics for unrestricted PFA induction. 'Overfitting' is the difference between held-out NLL and training set NLL. N is nondeterminism and alpha is the regularization parameter \u03b1 (see Section 2.4). Runs with |Q| = 128, 256, 512 and \u03b1 = 1 on Navajo data terminated early due to numerical underflow in the calculation of the stationary distribution.",
"type_str": "figure",
"uris": null,
"num": null
}
}
}
}