{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:30:48.169323Z"
},
"title": "Joint learning of constraint weights and gradient inputs in Gradient Symbolic Computation with constrained optimization",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Nelson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachusetts",
"location": {
"settlement": "Amherst",
"region": "MA",
"country": "USA"
}
},
"email": "manelson@umass.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper proposes a method for the joint optimization of constraint weights and symbol activations within the Gradient Symbolic Computation (GSC) framework. The set of grammars representable in GSC is proven to be a subset of those representable with lexically-scaled faithfulness constraints. This fact is then used to recast the problem of learning constraint weights and symbol activations in GSC as a quadratically-constrained version of learning lexically-scaled faithfulness grammars. This results in an optimization problem that can be solved using Sequential Quadratic Programming.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper proposes a method for the joint optimization of constraint weights and symbol activations within the Gradient Symbolic Computation (GSC) framework. The set of grammars representable in GSC is proven to be a subset of those representable with lexically-scaled faithfulness constraints. This fact is then used to recast the problem of learning constraint weights and symbol activations in GSC as a quadratically-constrained version of learning lexically-scaled faithfulness grammars. This results in an optimization problem that can be solved using Sequential Quadratic Programming.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper proposes a method for the joint optimization of constraint weights and symbol activations within the Gradient Symbolic Computation (GSC) framework. The set of grammars representable in GSC is proven to be a subset of those representable with lexically-scaled faithfulness constraints. This fact is then used to recast the problem of learning constraint weights and symbol activations in GSC as a quadratically-constrained version of learning lexically-scaled faithfulness grammars. This results in an optimization problem that can be solved using Sequential Quadratic Programming.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and background",
"sec_num": "1"
},
{
"text": "The remainder of this paper proceeds as follows. The rest of this section provides the relevant background on GSC, previous approaches to the same problem, and the maximum entropy grammars used in the proposed model. \u00a72 describes and proves the relationship between GSC grammars and lexically-scaled faithfulness constraints and then uses this proof to develop the proposed learning algorithm. \u00a73 illustrates with a minimal test case of an example used throughout the GSC literature, French liaison. \u00a74 provides a brief discussion and concludes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and background",
"sec_num": "1"
},
{
"text": "Gradient Symbolic Computation is a general cognitive framework in which structures are represented as gradient blends of multiple symbolic representations. Smolensky and Goldrick (2016) adapt standard optimality-theoretic constraints and optimization procedures to allow for inputs which consist of blends of symbolic structures. They propose that each position in the input is associated with a blend of discrete units, each of which is associated with an activation. In phonological terms an input may be composed of a series of positions, each of which is associated with a set of phonemes with different degrees of activation. The evaluation of constraints that make reference to the input, traditionally only faithfulness constraints, is done with respect to the activations of individual segments in the gradient representation. Suppose, for example, that an input contains a /t/ with activation 0.7. If this partially active /t/ is fully realized, then a constraint like Dep, which penalizes epenthesis, will be violated to a degree that reflects the extent of this epenthesis: in this example, a violation of strength 1 \u2212 0.7 = 0.3. Phonological grammars that allow for gradient inputs will henceforth be referred to as gradient symbolic grammars (GS grammars). GS grammars have been employed to capture phonological phenomena that are difficult for traditional representational theories, including opacity (Mai et al., 2018), exceptionality (Zimmerman, 2018; Hsu, 2018), and subregularity (Rosen, 2016; Smolensky and Goldrick, 2016).",
"cite_spans": [
{
"start": 156,
"end": 185,
"text": "Smolensky and Goldrick (2016)",
"ref_id": "BIBREF22"
},
{
"start": 1338,
"end": 1356,
"text": "(Mai et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonological grammars in Gradient Symbolic Computation",
"sec_num": "1.1"
},
{
"text": "GS grammars present a unique learning problem. In standard constraint-based grammars a phonological learner must discover the discrete underlying forms of the target language as well as the ranking or weighting of the constraints. In GS grammars the learner has to learn these things as well, while also learning the activations of all symbols at all positions in the underlying form. The complete GS grammar learning problem, discovering the discrete units, their activations, and the constraint ordering, has not been addressed in previous literature and will not be addressed here. Previous work has however looked at different subparts of this problem, including the learning of activations in isolation (Rosen, 2019) and the parallel learning of activations and constraint weights (Rosen, 2016; Smolensky et al., 2020). This parallel problem is the topic of the present work.",
"cite_spans": [
{
"start": 787,
"end": 800,
"text": "(Rosen, 2016;",
"ref_id": "BIBREF18"
},
{
"start": 801,
"end": 824,
"text": "Smolensky et al., 2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning gradient symbolic grammars",
"sec_num": "1.2"
},
{
"text": "Rosen (2016) presents an approach to jointly optimizing constraint weights and input activations based on simulated annealing, which is able to successfully learn a grammar capturing Japanese rendaku. As will be discussed below, the joint optimization of weights and activations is nonconvex, so simulated annealing is a promising approach. This work will not attempt to improve on the empirical performance of a simulated annealing model; rather, it will propose an alternative approach which is more closely related to gradient-based methods used elsewhere in the phonological learning literature (Goldwater and Johnson, 2003; Boersma and Pater, 2008; Hayes and Wilson, 2008). Smolensky et al. (2020) apply the Gradual Learning Algorithm (GLA) for Harmonic Grammar (Boersma and Pater, 2008), which is based on the Perceptron Update Rule (Rosenblatt, 1958), to the problem of learning both constraint weights and input activations. They report promising results; however, the convergence proof for the GLA does not necessarily apply to the case of GS grammars, where multiple interacting parameters are being simultaneously optimized. As will be discussed later, activations add quadratic terms to the Harmony function. This means that Harmonies are not linear in the parameters, and consequently the relationship between Harmonic Grammar and the Perceptron does not hold between GS grammars and the Perceptron. This work presents a third approach to jointly learning activations and constraint weights, based on the fact that blended inputs represent a scaling function on faithfulness violations and on previous work which has explored the learning of scaled faithfulness. The presented model is also not guaranteed to converge on a global optimum, so it does not improve on the GLA approach in that respect. It does, however, have the benefit of casting the GS grammar learning problem as an explicit and well-understood optimization procedure while also relating it to a familiar problem, learning lexically-scaled constraint weights (Hughto et al., 2019).",
"cite_spans": [
{
"start": 599,
"end": 628,
"text": "(Goldwater and Johnson, 2003;",
"ref_id": "BIBREF6"
},
{
"start": 629,
"end": 653,
"text": "Boersma and Pater, 2008;",
"ref_id": "BIBREF3"
},
{
"start": 654,
"end": 677,
"text": "Hayes and Wilson, 2008)",
"ref_id": "BIBREF7"
},
{
"start": 680,
"end": 703,
"text": "Smolensky et al. (2020)",
"ref_id": "BIBREF24"
},
{
"start": 768,
"end": 793,
"text": "(Boersma and Pater, 2008)",
"ref_id": "BIBREF3"
},
{
"start": 841,
"end": 859,
"text": "(Rosenblatt, 1958)",
"ref_id": "BIBREF20"
},
{
"start": 2036,
"end": 2057,
"text": "(Hughto et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning gradient symbolic grammars",
"sec_num": "1.2"
},
{
"text": "Unlike previous work in GSC, the learning algorithm in the present work will make use of Maximum Entropy (MaxEnt) Grammars (Goldwater and Johnson, 2003). A MaxEnt grammar is a log-linear model which allows for the probabilistic interpretation of a Harmonic Grammar (HG). In Harmonic Grammar the Harmony H of a candidate is the dot product of its constraint violations and the constraint weights. Constraint violations are generally treated as strictly negative and weights as strictly positive, so given an input x, a candidate y is optimal if it has the highest Harmony score in the set of all competing candidates Y(x).",
"cite_spans": [
{
"start": 123,
"end": 152,
"text": "(Goldwater and Johnson, 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H(x,y) = \\sum_i w_i c_i(x,y)",
"eq_num": "(1)"
}
],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "A MaxEnt probability distribution is computed by applying the softmax function to the set of Harmonies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(x) = \\frac{e^{H(x,y)}}{\\sum_{\\gamma \\in Y(x)} e^{H(x,\\gamma)}}",
"eq_num": "(2)"
}
],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "MaxEnt grammars are used for the learning algorithm purely because it is intuitive to define an interpretable loss function when model outputs are a probability distribution, as will be discussed in \u00a72.2. This is an expository choice: the learning algorithm presented below could be equivalently described as learning a Harmonic Grammar by minimizing a loss function that incorporates the softmax function. Because softmax is monotonic, a MaxEnt grammar makes the same prediction about the most well-formed candidate as its corresponding Harmonic Grammar. The observation driving the proposed learning algorithm for GS grammars is that GS grammars can be rewritten as a special case of lexically-scaled faithfulness (LSF) grammars. An LSF grammar (Linzen et al., 2013) is a grammar in which all morphemes come with a set of scales which combine additively with constraint weights. This section aims to prove that the set of expressible GS grammars is a subset of the expressible LSF grammars. In this work I assume all outputs are discrete structures and consequently only faithfulness constraints are gradiently evaluated (see footnote 1). Within the faithfulness constraints, Smolensky and Goldrick (2016) describe two classes in terms of how gradient activations in the input influence evaluation. Constraints belonging to the PROPORTIONAL class are violated to a degree proportional to the activation level of a deleted feature or segment, for example MAX constraints in Smolensky and Goldrick. Constraints belonging to the COMPLEMENT class are violated to a degree proportional to one minus the activation level of a realized feature or segment, for example DEP constraints in Smolensky and Goldrick. Introducing gradient inputs to the grammar thus rescales faithfulness constraint violations and has no effect on markedness constraint violations (see footnote 2).",
"cite_spans": [
{
"start": 747,
"end": 767,
"text": "(Linzen et al., 2013",
"ref_id": "BIBREF14"
},
{
"start": 1165,
"end": 1194,
"text": "Smolensky and Goldrick (2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "Consider the simple GS tableau in (1), where \u03b1 is the activation of the input segment b, M is the weight of a PROPORTIONAL constraint, and \u2206 is the weight of a COMPLEMENT constraint. Two hypothetical candidates compete over which of the two constraints is violated. Note that Harmony is a quadratic function of the weights and activations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "/b_\u03b1/ | PROP (weight M) | COMP (weight \u2206) | H; candidate \u03c6 [b]: 0 | 1 \u2212 \u03b1 | \u2206 \u2212 \u03b1\u2206; candidate \u03c8 \u2205: \u03b1 | 0 | \u03b1M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "Now consider the grammar in (2), which uses lexically scaled faithfulness (LSF) constraints. The scales are indexed to the input morpheme(s) and combine additively with constraint weights. So the functional weight of PROP when evaluated on the ith morpheme is the general weight of PROP, M, added to the scale brought by morpheme i, \u00b5_i. In this case Harmony is a linear function of the weights and scales. [Footnote 1: In GSC this is expressed as a strong quantization constraint, which pushes outputs into discrete states (Smolensky et al., 2014; Cho et al., 2017). Footnote 2: Zimmerman (2018) advocates for gradient outputs, which would allow for gradiently evaluated markedness constraints. The approach outlined below can be extended to cover this by allowing for lexically scaled markedness constraints as well (Linzen et al., 2013).]",
"cite_spans": [
{
"start": 443,
"end": 467,
"text": "(Smolensky et al., 2014;",
"ref_id": "BIBREF23"
},
{
"start": 468,
"end": 485,
"text": "Cho et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 725,
"end": 746,
"text": "(Linzen et al., 2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "/b/_i | PROP (weight M, scale \u00b5_i) | COMP (weight \u2206, scale \u03b4_i) | H; [b]: 0 | 1 | \u2206 + \u03b4_i; \u2205: 1 | 0 | M + \u00b5_i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "The tableaux in (1) and (2) make identical predictions as long as the equalities in Eq. (3) hold. In other words, if these equalities are true then the two grammars assign exactly the same Harmonies to the candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\Delta - \\alpha\\Delta = \\Delta + \\delta_i \\qquad \\alpha M = M + \\mu_i",
"eq_num": "(3)"
}
],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "Given this fact, any GS grammar can be converted into an LSF grammar by replacing any morpheme's activation values with a set of scales. Scales for COMP and PROP constraints can be computed from activations by rearranging Eq. (3), as in Eq. (4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\delta_i = -\\alpha\\Delta \\qquad \\mu_i = \\alpha M - M",
"eq_num": "(4)"
}
],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "Eq. (4) proves that any function representable with a GS grammar can be expressed with an equivalent LSF grammar. The converse, however, is not true: there are functions representable in LSF grammars that are not representable in GS grammars. This can be illustrated by considering how Eq. (3) would be used to convert an arbitrary LSF grammar into a GS grammar. Converting in this direction requires computing activations from the set of lexical scales. By rearranging Eq. (3), we see that there are two ways to compute activations from a given LSF grammar: activations can be computed either from the MAX (PROP) constraints or from the DEP (COMP) constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\alpha = \\frac{\\mu_i}{M} + 1 \\qquad \\alpha = -\\frac{\\delta_i}{\\Delta}",
"eq_num": "(5)"
}
],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "It is not possible for a single segment or feature to have multiple distinct activation levels. An LSF grammar is a valid GS grammar only if both methods of computing \u03b1 yield the same result. So while there is an LSF grammar for every GS grammar, there is not a GS grammar for every LSF grammar. Only the subset of LSF grammars that satisfy the equality in Eq. (6) are valid GS grammars.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "\u00b5_i/M + 1 = \u2212\u03b4_i/\u2206 (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "For simplicity, Eq. (6) can be rearranged as in Eq. (7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\mu_i \\Delta + M \\Delta + \\delta_i M = 0",
"eq_num": "(7)"
}
],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "This does not necessarily imply anything about the linguistic expressivity of GS and LSF grammars. The conversion from a GS to an LSF grammar assumes that there are no limits on the constraint set and consequently may require theoretically unwieldy constraints. For example, in order to capture the fact that there are separate activations at all positions in the input, there must be separate constraints for every feature at every position in the input. This point is ultimately unimportant for the present work, which aims to address the relationship between the mathematical, rather than linguistic, functions that are representable in the two theories, with the purpose of leveraging this relationship to construct a learning algorithm for GS grammars. The next section will outline exactly how this subset-superset relationship can be used to formulate the problem of simultaneously learning input activations and constraint weights as a quadratically constrained optimization problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy grammars",
"sec_num": "1.3"
},
{
"text": "The relationship between GS and LSF grammars described above is useful because it allows the problem of learning constraint weights and activations to be related to a well-understood problem, learning constraint weights and additive scales. Additive scales are themselves a special case of another formalism, lexically-indexed constraints. Because the scaled violations combine additively in the Harmony function, lexical scales can be represented as indexed versions of their general form which always incur the same number of violations as the general form. Moore-Cantwell and Pater (2016) show that the problem of learning lexically-indexed constraint weights is no different from the standard MaxEnt optimization problem, and Hughto et al. (2019) show that similar approaches can be taken to learning additive lexical scales. So, as in standard MaxEnt (Goldwater and Johnson, 2003), the task of learning an LSF grammar can be cast as minimizing the negative log-likelihood of the training data (see footnote 3), which is convex in the constraint weights. Unfortunately, because of the subset-superset relationship between GS and LSF grammars, the problem of learning GS grammars is not similarly reducible to the standard convex MaxEnt learning problem. Rather, the GS learning problem can be reduced to a constrained version of the LSF learning problem. Learning a GS grammar is equivalent to learning an LSF grammar subject to the hard constraint that the LSF grammar represents a possible GS grammar. This can be stated formally as the optimization problem in Eq. (8), where p(x) is computed using the standard MaxEnt probability function in Eq. (2). The weight vector w includes: the general PROP and COMP weights M and \u2206; i lexically indexed scales on PROP, \u00b5_1, ..., \u00b5_i; i lexically indexed scales on COMP, \u03b4_1, ..., \u03b4_i; and n general markedness constraint weights m_1, ..., m_n. The rightmost term in the objective function is an L2 prior with strength \u03bb.",
"cite_spans": [
{
"start": 560,
"end": 591,
"text": "Moore-Cantwell and Pater (2016)",
"ref_id": "BIBREF16"
},
{
"start": 729,
"end": 749,
"text": "Hughto et al. (2019)",
"ref_id": "BIBREF9"
},
{
"start": 857,
"end": 886,
"text": "(Goldwater and Johnson, 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning gradient symbolic grammars with constrained optimization",
"sec_num": "2.2"
},
{
"text": "w = [M, \u2206, \u00b5_1, ..., \u00b5_i, \u03b4_1, ..., \u03b4_i, m_1, ..., m_n]; min_w \u2212\u2211_x log p(x) + \u03bb||w||\u00b2, subject to: \u2211_i (\u00b5_i\u2206 + M\u2206 + \u03b4_i M)\u00b2 = 0 (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning gradient symbolic grammars with constrained optimization",
"sec_num": "2.2"
},
{
"text": "The constraint enforcing that the learned grammar is a viable GS grammar is the equality relationship in Eq. (7) summed over all input phonemes i. The constraint is squared within the sum to prevent positive and negative terms in the summation from canceling out. This ensures that activations computed for a given phoneme and morpheme index from both the PROP and COMP constraints will be guaranteed to return the same value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning gradient symbolic grammars with constrained optimization",
"sec_num": "2.2"
},
{
"text": "There are a number of potential approaches to constrained optimization problems like that posed above. It is worth mentioning here why methods familiar in computational phonology will not work. Maximum Entropy and Harmonic Grammars are generally fit using projected gradient descent, which is itself a method of constrained optimization. This entails computing the weight update, independent of any constraints placed on the weights, and then projecting the updated weights onto the set defined by the constraint. The use familiar in phonology is the enforcement of non-negativity, a restriction against negative weights which maintains the theoretical tenet of Optimality Theory that constraints can penalize but not reward. In this case projected gradient descent is effective. Not only is the projection function simple to compute, because the nearest non-negative number to any negative number is 0, but the space defined by the constraint is a convex set, meaning that projected gradient descent with this constraint has the same convergence guarantees as standard gradient descent (Levitin and Polyak, 1966). As defined in Eq. (8), the current problem is quadratically constrained, meaning that the set that satisfies the constraint is non-convex and a projection function onto the set is not easily computable. Consequently projected gradient descent is not only not guaranteed to converge, it is computationally intractable.",
"cite_spans": [
{
"start": 1077,
"end": 1111,
"text": "descent (Levitin and Polyak, 1966)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning gradient symbolic grammars with constrained optimization",
"sec_num": "2.2"
},
{
"text": "Another possible approach would be to treat the constraint as a prior. One simple issue with this is that priors are violable. Given that the goal is to learn a GS grammar, the constraint on the solution space defined above cannot be violated. One possible workaround would be to set the strength of the prior arbitrarily high, making it functionally non-violable. However, the intersection of the loss function and the space satisfying the constraint is non-convex and is not guaranteed to be connected. Consequently gradient descent and other widely applied optimization techniques are likely to fail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning gradient symbolic grammars with constrained optimization",
"sec_num": "2.2"
},
{
"text": "The proposed solution is to use Sequential Quadratic Programming (SQP), an iterative generalization of Newton's method developed for minimizing a function under quadratic constraints. The general approach is to iteratively take the quadratic approximation of the constrained objective function at w, minimize this subproblem with quadratic programming, and then set w to the solution. This yields increasingly better approximations and therefore increasingly better solutions. On a practical note, this requires computing the first three terms of the Taylor expansion of the objective function at a given point, meaning that it must be twice differentiable. For a detailed derivation and discussion of the method see Boggs and Tolle (1995).",
"cite_spans": [
{
"start": 719,
"end": 741,
"text": "Boggs and Tolle (1995)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning gradient symbolic grammars with constrained optimization",
"sec_num": "2.2"
},
{
"text": "To illustrate the promise of the proposed approach to learning GS grammars, this section applies it to a minimal example of the French liaison problem that Smolensky and Goldrick (2016) use to motivate the use of gradient representations in the phonological grammar. Liaison is a phenomenon in which, in certain syntactic contexts, a consonant surfaces between vowel-final and vowel-initial words when hiatus would otherwise occur. The identity of this consonant, the liaison consonant, is not phonologically predictable. There is a long literature on the phonological analysis of liaison and its interacting processes, including competing analyses that propose that the liaison consonant is specified by the first word (Tranel, 1996) or by the second word in the sequence (Morin, 2005).",
"cite_spans": [
{
"start": 156,
"end": 185,
"text": "Smolensky and Goldrick (2016)",
"ref_id": "BIBREF22"
},
{
"start": 720,
"end": 734,
"text": "(Tranel, 1996)",
"ref_id": "BIBREF25"
},
{
"start": 774,
"end": 787,
"text": "(Morin, 2005)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "An example",
"sec_num": "3"
},
{
"text": "There is a class of words which are phonologically vowel initial but exceptionally do not trigger the surfacing of a liaison consonant in environments where it is otherwise predicted to surface. These words are always the second word in the pair and are called the h-aspir\u00e9 words, referencing the fact that they are orthographically h-initial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example",
"sec_num": "3"
},
{
"text": "Consider the following set of French surface forms. When petit comes together with ami, a vowel-initial word, the liaison consonant [t] surfaces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example",
"sec_num": "3"
},
{
"text": "[p\u00f8ti] petit",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example",
"sec_num": "3"
},
{
"text": "'small' + [ami] ami 'friend' \u2192 [p\u00f8titami] petit ami 'boyfriend'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example",
"sec_num": "3"
},
{
"text": "However, when petit is followed by h\u00e9ros, an h-aspir\u00e9 word, no liaison consonant surfaces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example",
"sec_num": "3"
},
{
"text": "[p\u00f8ti] petit",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example",
"sec_num": "3"
},
{
"text": "'small' + [e\u0281o] h\u00e9ros 'hero' \u2192 [p\u00f8tie\u0281o] petit h\u00e9ros 'little hero'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example",
"sec_num": "3"
},
{
"text": "The adjective [p\u00f8ti] petit is associated with a liaison t. When it occurs in isolation the liaison consonant does not surface; however, when it occurs before the vowel-initial [ami] ami the liaison consonant surfaces, preventing two adjacent vowels from surfacing. Despite being vowel-initial, the h-aspir\u00e9 word [e\u0281o] h\u00e9ros does not trigger the surfacing of the liaison consonant when it surfaces after petit. Smolensky and Goldrick (2016) offer an analysis of this phenomenon couched in Gradient Symbolic Computation, which suggests that the liaison consonant is specified by both the first and second word in the pair. In their analysis both words contain partially active edge consonants. When the words surface together the combined activation is enough to cause the liaison consonant to surface. In this analysis h-aspir\u00e9 words differ from their liaison-participating counterparts in that they have no or minimal activation on liaison consonants at their left edge, preventing them from contributing to the combined activation.",
"cite_spans": [
{
"start": 14,
"end": 20,
"text": "[p\u00f8ti]",
"ref_id": null
},
{
"start": 408,
"end": 437,
"text": "Smolensky and Goldrick (2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "An example",
"sec_num": "3"
},
{
"text": "In terms of the minimal dataset above, they propose that there is a partially-activated /t/ in the input at both the right edge of petit and at the left edge of ami. When either word occurs in isolation there is not sufficient activation of the /t/ for it to surface. When the two words surface adjacent to one another the combined activation of /t/ in both words overcomes a threshold and the liaison [t] surfaces. In the h-aspir\u00e9 word h\u00e9ros there is little to no activation on an input /t/ at the left edge. Despite the consequence of realizing a marked vowel-vowel sequence, the liaison [t] does not surface between [p\u00f8ti] and [e\u0281o] because the combined activation of the input /t/s is not enough to justify its realization. They argue that this analysis overcomes empirical shortcomings of analyses which place the onus of specifying the liaison consonant exclusively on the first or second word; see Smolensky and Goldrick (2016) and Smolensky et al. (2020) for detailed discussion.",
"cite_spans": [
{
"start": 397,
"end": 400,
"text": "[t]",
"ref_id": null
},
{
"start": 900,
"end": 929,
"text": "Smolensky and Goldrick (2016)",
"ref_id": "BIBREF22"
},
{
"start": 934,
"end": 957,
"text": "Smolensky et al. (2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "An example",
"sec_num": "3"
},
{
"text": "As proof of concept a GS grammar was fit to these data using the procedure described above. Model parameters include the weight of three constraints, HIATUS, MAX(t) and DEP(t), as well as the activation levels of liaison /t/ at the left edge of petit and at the right edge of ami and h\u00e9ros. MAX(t) is a PROP constraint and DEP(t) is a COMP constraint. In every tableau there are two competing candidates, one in which [t] surfaces and one in which it does not. Activations were constrained to being positive by adding the constraint in Eq. (9) to the optimization procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i min \u00b5 i M + 1, 0 = 0",
"eq_num": "(9)"
}
],
"section": "An example",
"sec_num": "3"
},
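Eq. (9) forces the learned activations to be positive: each summand min(µ_i + 1, 0) is 0 exactly when the implied activation µ_i + 1 is non-negative, so the sum vanishes only when every activation is. A minimal sketch of the constraint function (the function name and inputs are illustrative, not taken from the paper's implementation):

```python
def eq9_constraint(mus):
    """Left-hand side of Eq. (9): sum_i min(mu_i + 1, 0).

    Each term is 0 when mu_i + 1 (the implied activation) is
    non-negative, and negative otherwise, so the sum equals 0
    exactly when every activation is non-negative.
    """
    return sum(min(mu + 1.0, 0.0) for mu in mus)

# All implied activations non-negative: constraint satisfied.
assert eq9_constraint([-0.5, 0.0, 2.3]) == 0.0

# mu = -1.4 implies an activation of -0.4: constraint violated.
assert abs(eq9_constraint([-1.4, 0.0]) + 0.4) < 1e-9
```

Passing this function as an equality constraint to the optimizer then confines the search to grammars whose activations are all non-negative.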
{
"text": "In practice the Jacobian and Hessian of the objective function are estimated analytically, so the algorithm described above is non-deterministic. The quadratically-constrained optimization problem is also generally non-convex, so variation is expected across runs. Consequently 10 models were fit with weights randomly initialized in [-2,0) . An L2 prior is included with \u03bb = 0.01. The average activations of input /t/s in all words are shown in Table ( 2). Recall that there are two possible ways to compute the activations, from the COMP or PROP constraints. To ensure that the model works correctly, both methods of computing activations are shown. Note that these are negligibly different, confirming that the final grammar is indeed a valid GS grammar.",
"cite_spans": [
{
"start": 334,
"end": 340,
"text": "[-2,0)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 446,
"end": 453,
"text": "Table (",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "An example",
"sec_num": "3"
},
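The fitting procedure just described, a smooth objective plus quadratic equality constraints solved by SQP from random restarts, can be sketched with SciPy's SLSQP solver. The objective and constraint below are toy stand-ins (not the paper's loss or grammar), chosen only to show the problem shape:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the GS fit: minimize a smooth objective subject to a
# quadratic equality constraint.  The real objective is the cross-entropy
# of the training data; this quadratic bowl is illustrative only.
objective = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
quadratic_eq = {"type": "eq", "fun": lambda x: x[0] ** 2 + x[1] ** 2 - 1.0}

# As in the experiment, fit from several random initializations in [-2, 0),
# since the quadratically-constrained problem is generally non-convex.
rng = np.random.default_rng(0)
solutions = []
for _ in range(10):
    x0 = rng.uniform(-2.0, 0.0, size=2)
    res = minimize(objective, x0, method="SLSQP", constraints=[quadratic_eq])
    if res.success:
        solutions.append(res.x)

# Keep the best local solution across restarts; for this toy problem it
# sits at (1/sqrt(2), 1/sqrt(2)) on the unit circle.
best = min(solutions, key=objective)
```

Different restarts may settle on different stationary points, which is why the paper reports averages and standard deviations over 10 runs rather than a single fit.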
{
"text": "PROP p\u00f8ti(t) 0.296 (0.062) 0.296 (0.062) (t)ami 0.614 (0.081) 0.614 (0.081) (t)eKo -2e-5 (6e-5) -1e-4 (2e-4) The activations suggest that the model may be converging on a solution that resembles the analysis proposed by Smolensky and Goldrick. Petit and ami both have a partially-activated /t/ in the at the relevant edge, while the activation of liaison /t/ in h\u00e9ros is approximately 0. The individual tableaux confirm that the learned analysis resembles Smolensky and Goldrick's. For simplicity, and consistency with previous work, all tableaux will be presented without probabilities, as HG tableaux.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMP",
"sec_num": null
},
{
"text": "While petit and ami both have partially-activated underlying /t/s, the activation is low enough that when either of these words occur in isolation the /t/ is not realized. This is demonstrated in Tableaux (3) and (4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMP",
"sec_num": null
},
{
"text": "(3) -13. In h\u00e9ros the underlying liaison /t/ has a 0 activation, so it trivially does not surface in isolation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMP",
"sec_num": null
},
{
"text": "-13.1 -5.3 -0.4 /t 0.00 eKo/ DEP(t) HIATUS MAX(t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMP",
"sec_num": null
},
{
"text": "H [eKo] 0 0 0.0 -0.0 [teKo]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMP",
"sec_num": null
},
{
"text": "1.0 0 0 -13.1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMP",
"sec_num": null
},
{
"text": "When petit and ami are realized next to one another, their combined activation, as well as the threat of a HIATUS violation, are enough to make the liaison consonant surface.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMP",
"sec_num": null
},
{
"text": "-13.1 -5.3 -0.4 /p\u00f8ti t 0.30+0.61 ami/ DEP(t) HIATUS MAX(t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMP",
"sec_num": null
},
{
"text": "H [p\u00f8tiami] 0 1 1.01 -5.70 [p\u00f8titami] 0.01 0 0 -0.13",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMP",
"sec_num": null
},
{
"text": "However this is not the case when petit and h\u00e9ros surface together. Because h\u00e9ros contributes 0 activation to /t/, the cost of epenthesizing the remaining activation needed for the /t/ to be realized does not outweigh the cost of incurring a HIATUS violation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMP",
"sec_num": null
},
{
"text": "-13.1 -5.3 -0.4 /p\u00f8ti t 0.30+0.00 eKo/ DEP(t) HIATUS MAX(t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMP",
"sec_num": null
},
{
"text": "H [p\u00f8tieKo] 0 1 0.30 -5.42 [p\u00f8titeKo] 0.70 0 0 -9.17",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMP",
"sec_num": null
},
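The tableaux above follow from straightforward weighted-violation arithmetic, which can be checked directly. A small sketch (the helper function is ours; the weights and violation profiles are copied from the petit + h\u00e9ros tableau):

```python
# Learned constraint weights, in the order DEP(t), HIATUS, MAX(t).
WEIGHTS = (13.1, 5.3, 0.4)

def harmony(violations, weights=WEIGHTS):
    """Harmony of a candidate: the negated weighted sum of its violations."""
    return -sum(w * v for w, v in zip(weights, violations))

# /pøti t^0.30+0.00 eKo/: violation profiles from the tableau.
candidates = {
    "pøtieKo": (0.0, 1.0, 0.30),   # HIATUS, plus a gradient MAX(t) of 0.30
    "pøtiteKo": (0.70, 0.0, 0.0),  # epenthesize the missing 0.70 of /t/
}
winner = max(candidates, key=lambda c: harmony(candidates[c]))
assert winner == "pøtieKo"  # H = -5.42 beats H = -9.17
```

Because h\u00e9ros contributes no activation, the DEP(t) cost of the missing 0.70 outweighs the HIATUS and gradient MAX(t) costs, so the hiatus candidate wins.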
{
"text": "The presented learning algorithm for GS grammars reliably converges on the analysis of French liaison offered by Smolensky and Goldrick (2016) as a motivating pattern for the inclusion of gradient inputs in the phonological grammar. This serves to illustrate the fact that the proposed learning algorithm is capable of learning interpretable GS grammars and has promising application in future work, both in finding GSC analyses of linguistic phenomena and in evaluating the learnability of phenomena in the GSC framework.",
"cite_spans": [
{
"start": 113,
"end": 142,
"text": "Smolensky and Goldrick (2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "COMP",
"sec_num": null
},
{
"text": "This paper has presented a method for the joint optimization of blended inputs and constraint weights in gradient symbolic grammars. The proposed method leverages the fact that the set of functions representable by GS grammars is a subset of those representable by lexically-scaled faithfulness grammars to cast the GS grammar learning problem as a constrained version of the LSF grammar learning problem. The primary aim of this work is to introduce and justify the method, rather than discuss its implications for linguistic theory, however points of interest to linguistic theory will be briefly addressed here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "4"
},
{
"text": "The subset-superset relationship that was shown to hold between GS and LSF grammars does not make predictions regarding the expressivity of the two theories in terms of the linguistic phenomena they are capable of representing. It does, however, highlight differences between the two theories which may provide a starting point for comparing their linguistic expressivity. For example, representing GS grammars in the LSF framework requires a set of faithfulness constraints which make reference to every position in every input. This differs from standard approaches to positional faithfulness, where faithfulness constraints make reference to prosodic positions (Beckman, 1998) , and may yield pathological predictions. Consequently, despite the fact that LSF grammars represent a greater range of functions, it is likely that there are phenomena that can be captured with GS grammars but not with LSF grammars given a limited constraint set. This is left to future work. This work has also shown that the optimization problem for GS grammars is likely more difficult than the analogous problem in other frameworks designed to capture the same types of phonological phenomena. For example, grammars with lexicallyscaled constraints like those mentioned throughout this paper have also been shown to capture lexical exceptionality and subregularity but, as described, they correspond to a convex optimization problem. Similarly, grammars with underlying representation constraints have also been shown to be a viable approach to capturing these phonological phenomena (Apoussidou, 2007; Smith, 2015) and, in learning problems like that described in this paper present a convex optimization problem. The critical difference between these approaches and GS grammars is that Harmony function for GS grammars is quadratic, consequently the optimization problem is not guaranteed to be convex. 
It is not necessarily the case that the complexity of the related optimization problems is a valid metric along which to compare linguistic theories. Previous work however, has made strong claims regarding the relationship between the numerical optimization of Max-Ent/HG grammars and the learning trajectories of human language learners (Boersma et al., 2000; J\u00e4ger, 2007; Tessier, 2008, 2011) , in which case there may be merit in comparing the optimization procedure for competing theories.",
"cite_spans": [
{
"start": 664,
"end": 679,
"text": "(Beckman, 1998)",
"ref_id": "BIBREF1"
},
{
"start": 1569,
"end": 1587,
"text": "(Apoussidou, 2007;",
"ref_id": "BIBREF0"
},
{
"start": 1588,
"end": 1600,
"text": "Smith, 2015)",
"ref_id": "BIBREF21"
},
{
"start": 2228,
"end": 2250,
"text": "(Boersma et al., 2000;",
"ref_id": "BIBREF2"
},
{
"start": 2251,
"end": 2263,
"text": "J\u00e4ger, 2007;",
"ref_id": "BIBREF10"
},
{
"start": 2264,
"end": 2284,
"text": "Tessier, 2008, 2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "4"
},
{
"text": "The broader GSC framework offers a novel theory of phonological grammars, the expressivity and restrictiveness of which has not been thoroughly explored. This work hopes to facilitate further research by introducing a method for simultaneously learning constraint weights and input activations of GS grammars which both relates GS grammars to an existing phonological framework and serves as a tool in finding GS analyses of phonological phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "4"
},
{
"text": "Or other equivalent loss function, such as Kullback-Leibler divergence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Thank you to Katherine Blake, Gaja Jarosz, Andrew Lamont, Joe Pater, Brandon Prickett and everyone at UMass Sound Workshop for productive discussion of the ideas presented above, as well as to four anonymous SIGMORPHON reviewers for specific comments on this paper. All remaining errors are my own.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The learnability of metrical phonology",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Apoussidou",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Apoussidou. 2007. The learnability of metrical phonology. Ph.D. thesis, University of Amsterdam.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Positional faithfulness",
"authors": [
{
"first": "Jill",
"middle": [
"N"
],
"last": "Beckman",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jill N. Beckman. 1998. Positional faithfulness. Ph.D. thesis.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Gradual constraint-ranking learning algorithm predicts acquisition order",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Boersma",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Levelt",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of Child Language Research Forum",
"volume": "30",
"issue": "",
"pages": "229--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Boersma, Clara Levelt, et al. 2000. Gradual constraint-ranking learning algorithm predicts acqui- sition order. In Proceedings of Child Language Re- search Forum, volume 30, pages 229-237. CSLI Publications Stanford, CA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Convergence properties of a gradual learning algorithm for harmonic grammar",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Boersma",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Pater",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Boersma and Joe Pater. 2008. Convergence prop- erties of a gradual learning algorithm for harmonic grammar.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Sequential quadratic programming",
"authors": [
{
"first": "Paul",
"middle": [
"T"
],
"last": "Boggs",
"suffix": ""
},
{
"first": "Jon",
"middle": [
"W"
],
"last": "Tolle",
"suffix": ""
}
],
"year": 1995,
"venue": "Acta numerica",
"volume": "4",
"issue": "",
"pages": "1--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul T. Boggs and Jon W. Tolle. 1995. Sequential quadratic programming. Acta numerica, 4:1-51.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Incremental parsing in a continuous dynamical system: Sentence processing in gradient symbolic computation",
"authors": [
{
"first": "Pyeong Whan",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Goldrick",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Smolensky",
"suffix": ""
}
],
"year": 2017,
"venue": "Linguistics Vanguard",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1515/lingvan-2016-0105"
]
},
"num": null,
"urls": [],
"raw_text": "Pyeong Whan Cho, Matthew Goldrick, and Paul Smolensky. 2017. Incremental parsing in a contin- uous dynamical system: Sentence processing in gra- dient symbolic computation. Linguistics Vanguard, 3(1).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning OT constraint rankings using a maximum entropy model",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Stockholm Workshop on Variation in Optimality Theory",
"volume": "",
"issue": "",
"pages": "111--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater and Mark Johnson. 2003. Learning OT constraint rankings using a maximum entropy model. Proceedings of the Stockholm Workshop on Variation in Optimality Theory, pages 111-120.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A maximum entropy model of phonotactic and phonotactic learning",
"authors": [
{
"first": "Bruce",
"middle": [],
"last": "Hayes",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2008,
"venue": "Linguistic Inquiry",
"volume": "39",
"issue": "3",
"pages": "379--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruce Hayes and Colin Wilson. 2008. A maximum en- tropy model of phonotactic and phonotactic learning. Linguistic Inquiry, 39(3):379-440.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Scalar constraints and gradient symbolic representations generate exceptional prosodification effects without exceptional prosody",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Hsu",
"suffix": ""
}
],
"year": 2018,
"venue": "Handout, West Coast Conference on Formal Linguistics",
"volume": "36",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Hsu. 2018. Scalar constraints and gradient sym- bolic representations generate exceptional prosodifi- cation effects without exceptional prosody. In Hand- out, West Coast Conference on Formal Linguistics, volume 36.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning exceptionality and variation with lexically scaled maxent",
"authors": [
{
"first": "Coral",
"middle": [],
"last": "Hughto",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Lamont",
"suffix": ""
},
{
"first": "Brandon",
"middle": [],
"last": "Prickett",
"suffix": ""
},
{
"first": "Gaja",
"middle": [],
"last": "Jarosz",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Second Annual Meeting of the Society for Computation in Linguistics (SCiL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Coral Hughto, Andrew Lamont, Brandon Prickett, and Gaja Jarosz. 2019. Learning exceptionality and vari- ation with lexically scaled maxent. In Proceedings of the Second Annual Meeting of the Society for Computation in Linguistics (SCiL).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Maximum entropy models and stochastic optimality theory. Architectures, rules, and preferences: variations on themes by",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard J\u00e4ger. 2007. Maximum entropy models and stochastic optimality theory. Architectures, rules, and preferences: variations on themes by Joan W. Bresnan. Stanford: CSLI, pages 467-479.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Gradual learning and faithfulness: consequences of ranked vs",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Jesney",
"suffix": ""
},
{
"first": "Anne-Michelle",
"middle": [],
"last": "Tessier",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Jesney and Anne-Michelle Tessier. 2008. Grad- ual learning and faithfulness: consequences of ranked vs. weighted constraints.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Biases in harmonic grammar: the road to restrictive learning",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Jesney",
"suffix": ""
},
{
"first": "Anne-Michelle",
"middle": [],
"last": "Tessier",
"suffix": ""
}
],
"year": 2011,
"venue": "Natural Language & Linguistic Theory",
"volume": "29",
"issue": "1",
"pages": "251--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Jesney and Anne-Michelle Tessier. 2011. Bi- ases in harmonic grammar: the road to restrictive learning. Natural Language & Linguistic Theory, 29(1):251-290.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Constrained minimization methods. USSR Computational mathematics and mathematical physics",
"authors": [
{
"first": "Evgeny",
"middle": [
"S"
],
"last": "Levitin",
"suffix": ""
},
{
"first": "Boris",
"middle": [
"T"
],
"last": "Polyak",
"suffix": ""
}
],
"year": 1966,
"venue": "",
"volume": "6",
"issue": "",
"pages": "1--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evgeny S. Levitin and Boris T. Polyak. 1966. Con- strained minimization methods. USSR Compu- tational mathematics and mathematical physics, 6(5):1-50.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Lexical and phonological variation in russian prepositions",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Sofya",
"middle": [],
"last": "Kasyanenko",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Gouskova",
"suffix": ""
}
],
"year": 2013,
"venue": "Phonology",
"volume": "30",
"issue": "3",
"pages": "453--515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Sofya Kasyanenko, and Maria Gouskova. 2013. Lexical and phonological variation in russian prepositions. Phonology, 30(3):453-515.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Phonological opacity as local optimization in gradient symbolic computation",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Mai",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Bakovic",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Goldrick",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Society for Computation in Linguistics",
"volume": "1",
"issue": "",
"pages": "219--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Mai, Eric Bakovic, and Matt Goldrick. 2018. Phonological opacity as local optimization in gradi- ent symbolic computation. Proceedings of the Soci- ety for Computation in Linguistics, 1(1):219-220.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Gradient exceptionality in maximum entropy grammar with lexically specific constraints",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Moore-Cantwell",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Pater",
"suffix": ""
}
],
"year": 2016,
"venue": "Catalan Journal of Linguistics",
"volume": "15",
"issue": "",
"pages": "53--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claire Moore-Cantwell and Joe Pater. 2016. Gradient exceptionality in maximum entropy grammar with lexically specific constraints. Catalan Journal of Linguistics, 15:53-66.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "La liaison rel\u00e8ve-t-elle d'une tendance\u00e0\u00e9viter les hiatus? r\u00e9flexions sur so\u0144 evolution historique",
"authors": [
{
"first": "Yves",
"middle": [
"Charles"
],
"last": "Morin",
"suffix": ""
}
],
"year": 2005,
"venue": "Langages",
"volume": "",
"issue": "2",
"pages": "8--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Charles Morin. 2005. La liaison rel\u00e8ve-t-elle d'une tendance\u00e0\u00e9viter les hiatus? r\u00e9flexions sur so\u0144 evolution historique. Langages, (2):8-23.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Predicting the unpredictable: Capturing the apparent semi-regularity of rendaku voicing in japanese through harmonic grammar",
"authors": [
{
"first": "Eric",
"middle": [
"R"
],
"last": "Rosen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of BLS",
"volume": "42",
"issue": "",
"pages": "235--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric R. Rosen. 2016. Predicting the unpredictable: Capturing the apparent semi-regularity of rendaku voicing in japanese through harmonic grammar. In Proceedings of BLS, volume 42, pages 235-249.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning complex inflectional paradigms through blended gradient inputs",
"authors": [
{
"first": "Eric",
"middle": [
"R"
],
"last": "Rosen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Society for Computation in Linguistics (SCiL) 2019",
"volume": "",
"issue": "",
"pages": "102--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric R. Rosen. 2019. Learning complex inflectional paradigms through blended gradient inputs. In Pro- ceedings of the Society for Computation in Linguis- tics (SCiL) 2019, pages 102-112.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The perceptron: a probabilistic model for information storage and organization in the brain",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Rosenblatt",
"suffix": ""
}
],
"year": 1958,
"venue": "Psychological review",
"volume": "65",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Rosenblatt. 1958. The perceptron: a probabilis- tic model for information storage and organization in the brain. Psychological review, 65(6):386.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Phonologically conditioned allomorphy and UR constraints",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Smith. 2015. Phonologically conditioned allo- morphy and UR constraints. Ph.D. thesis, Univer- sity of Massachusetts Amherst.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Gradient symbolic representations in grammar: The case of french liason",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Smolensky",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Goldrick",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Smolensky and Matthew Goldrick. 2016. Gradi- ent symbolic representations in grammar: The case of french liason. Technical report.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Optimization and quantization in gradient symbol systems: a framework for integrating the continuous and the discrete in cognition",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Smolensky",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Goldrick",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Mathis",
"suffix": ""
}
],
"year": 2014,
"venue": "Cognitive science",
"volume": "38",
"issue": "",
"pages": "1102--1138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Smolensky, Matthew Goldrick, and Donald Mathis. 2014. Optimization and quantization in gra- dient symbol systems: a framework for integrating the continuous and the discrete in cognition. Cogni- tive science, 38(6):1102-1138.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning a gradient grammar of French liaison",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Smolensky",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Rosen",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Goldrick",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2019 Annual Meeting on Phonology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Smolensky, Eric Rosen, and Matthew Goldrick. 2020. Learning a gradient grammar of French liai- son. In Proceedings of the 2019 Annual Meeting on Phonology, Stonybrook NY.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "French liaison and elision revisited: A unified account within optimality theory. Aspects of Romance linguistics",
"authors": [
{
"first": "Bernard",
"middle": [],
"last": "Tranel",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "433--455",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernard Tranel. 1996. French liaison and elision re- visited: A unified account within optimality theory. Aspects of Romance linguistics, pages 433-455.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Gradient Symbolic Representations in the output: A case study from Moses Columbian Salishan stress",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Zimmerman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Forty-Eigth Annual Meeting of the North East Linguistic Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva Zimmerman. 2018. Gradient Symbolic Represen- tations in the output: A case study from Moses Columbian Salishan stress. In Proceedings of the Forty-Eigth Annual Meeting of the North East Lin- guistic Society.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td colspan=\"2\">five tableaux across the 10 runs.</td></tr><tr><td colspan=\"2\">Candidate avg.</td><td>s.d.</td></tr><tr><td>[p\u00f8ti]</td><td colspan=\"2\">0.999 1e-4</td></tr><tr><td>[p\u00f8tit]</td><td>0.001</td></tr><tr><td>[ami]</td><td colspan=\"2\">0.991 0.003</td></tr><tr><td>[tami]</td><td>0.008</td></tr><tr><td>[eKo]</td><td colspan=\"2\">0.999 2e-7</td></tr><tr><td>[teKo]</td><td>1e-6</td></tr><tr><td colspan=\"3\">[p\u00f8tit ami] 0.980 0.009</td></tr><tr><td colspan=\"2\">[p\u00f8ti ami] 0.020</td></tr><tr><td colspan=\"3\">[p\u00f8tit eKo] 0.015 0.005</td></tr><tr><td colspan=\"2\">[p\u00f8ti eKo] 0.985</td></tr><tr><td colspan=\"3\">Table 1: Average final probability across 10 runs on all</td></tr><tr><td colspan=\"3\">forms. indicates the target surface forms.</td></tr><tr><td>1) shows the</td><td/></tr><tr><td>average final probability of each candidate in the</td><td/></tr></table>",
"num": null,
"html": null,
"text": "",
"type_str": "table"
},
"TABREF1": {
"content": "<table/>",
"num": null,
"html": null,
"text": "Average (s.d.) activation of liaison consonants in all words as computed from the \u2206 and M constraints.",
"type_str": "table"
}
}
}
}