{
"paper_id": "W18-0301",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:30:13.911745Z"
},
"title": "Statistical learning theory and linguistic typology: a learnability perspective on OT's strict domination",
"authors": [
{
"first": "Emile",
"middle": [],
"last": "Enguehard",
"suffix": "",
"affiliation": {},
"email": "emile.enguehard@ens.fr"
},
{
"first": "Edward",
"middle": [],
"last": "Flemming",
"suffix": "",
"affiliation": {},
"email": "flemming@mit.edu"
},
{
"first": "Giorgio",
"middle": [],
"last": "Magri",
"suffix": "",
"affiliation": {},
"email": "magrigrg@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper develops a learnability argument for strict domination by looking at the generalization error of learners trained on OT and HG target grammars. The argument is based on both a review of error bounds in the recent statistical learning literature and simulation results on realistic phonological test cases.",
"pdf_parse": {
"paper_id": "W18-0301",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper develops a learnability argument for strict domination by looking at the generalization error of learners trained on OT and HG target grammars. The argument is based on both a review of error bounds in the recent statistical learning literature and simulation results on realistic phonological test cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "According to Optimality Theory (OT; Prince and Smolensky 2004), constraint interaction in natural language phonology is severely constrained by the hypothesis of strict domination. According to this hypothesis, \"the constraints [are] arranged in a hierarchy\" and \"each constraint is strictly more important than -takes absolute priority over -all the constraints lower-ranked in the hierarchy. [. . . ] Strict domination thus limits drastically the range of possible strength-interactions between constraints to those representable with the algebra of total order\" (Prince and Smolensky, 1997). This hypothesis of strict domination has been challenged in the recent phonological literature (Pater, 2009; Pater, 2016), which has therefore started to explore an implementation of constraint-based phonology which does away with strict domination, known as Harmonic Grammar (HG; Legendre et al., 1990a,b). Section 2 re-assesses the OT versus HG debate, concluding that HG over-generates for many natural constraint sets and that natural language phonology thus supports OT's hypothesis of strict domination.",
"cite_spans": [
{
"start": 36,
"end": 62,
"text": "Prince and Smolensky 2004)",
"ref_id": "BIBREF24"
},
{
"start": 229,
"end": 234,
"text": "[are]",
"ref_id": null
},
{
"start": 395,
"end": 403,
"text": "[. . . ]",
"ref_id": null
},
{
"start": 566,
"end": 594,
"text": "(Prince and Smolensky, 1997)",
"ref_id": "BIBREF23"
},
{
"start": 692,
"end": 705,
"text": "(Pater, 2009;",
"ref_id": "BIBREF20"
},
{
"start": 706,
"end": 718,
"text": "Pater, 2016)",
"ref_id": "BIBREF21"
},
{
"start": 879,
"end": 904,
"text": "Legendre et al., 1990a,b;",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Why should constraint interaction in natural language phonology display strict domination? It has been conjectured that \"demands of learnability [might] provide a pressure for strict domination among constraints\", although \"it remains an open problem to formally characterize exactly what is essential about strict domination to guarantee efficient learning.\" Subsequent work takes a closer look at this alleged connection between strict domination and learnability, examining error bounds in terms of a classical measure of the learning complexity of a hypothesis class, namely its Vapnik-Chervonenkis (VC) dimension (Vapnik and Chervonenkis, 1971). But it finds that the VC dimension is the same for OT and HG, despite OT typologies being smaller than HG typologies because of strict domination. The conclusion is that, \"though there may be factors that favor one model [OT or HG] over the other, the complexity of learning [. . . ] is not one of them.\" Yet, VC dimension is an old measure of learning complexity (it dates back to the seventies) which is inevitably coarse, as it applies to completely arbitrary classifiers. Since Schapire et al. (1998), statistical learning theory has instead focused on a special class of classifiers, namely voting classifiers, which aggregate the \"votes\" of more basic classifiers scaled through corresponding weights. For this special class of classifiers, better error bounds have been developed which take into account the margin of \"confidence\" with which a classifier succeeds on the data. More recently, Koltchinskii and Panchenko (Koltchinskii and Panchenko, 2002; Koltchinskii et al., 2003b; Koltchinskii et al., 2003a; Koltchinskii and Panchenko, 2005) have further refined margin theory through error bounds which depend not only on the margin but also on the rate of decay of the weights of the basic classifiers: the bounds get better (that is, provide guarantees for a smaller generalization error) as the rate of decay increases.",
"cite_spans": [
{
"start": 608,
"end": 639,
"text": "(Vapnik and Chervonenkis, 1971)",
"ref_id": "BIBREF32"
},
{
"start": 860,
"end": 870,
"text": "[OT or HG]",
"ref_id": null
},
{
"start": 914,
"end": 922,
"text": "[. . . ]",
"ref_id": null
},
{
"start": 1120,
"end": 1142,
"text": "Schapire et al. (1998)",
"ref_id": null
},
{
"start": 1538,
"end": 1599,
"text": "Koltchinskii and Panchenko (Koltchinskii and Panchenko, 2002;",
"ref_id": "BIBREF10"
},
{
"start": 1600,
"end": 1627,
"text": "Koltchinskii et al., 2003b;",
"ref_id": "BIBREF13"
},
{
"start": 1628,
"end": 1655,
"text": "Koltchinskii et al., 2003a;",
"ref_id": "BIBREF12"
},
{
"start": 1656,
"end": 1689,
"text": "Koltchinskii and Panchenko, 2005)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Crucially, HG and OT grammars can be construed as voting classifiers with the phonological constraints playing the role of the basic classifiers. Section 3 thus brings Koltchinskii and Panchenko's result to bear on the debate between HG and OT, through the well known characterization of OT as a special case of HG with weights decreasing fast, specifically exponentially.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Section 4 complements these theoretical results with simulation-based estimates of the generalization error (codes and data are provided as online supplements). We look at two test cases related to vowel harmony and syllable types. We compute the corresponding typologies of OT grammars and HG-non-OT grammars (namely HG grammars with no OT correspondent). For both types of target grammars, we compute the generalization error of the hypothesis that performs best (that is, has the largest margin) on a training set of cardinality n. We show that on average the generalization error decreases faster as a function of n for the OT targets than for the HG-non-OT ones. Section 5 concludes the paper and discusses various issues to explore in future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As reviewed above, HG fundamentally differs from OT because it does away with strict domination and therefore allows for gang effects in which multiple violations of lower-weighted constraints outweigh a violation of a higher-weighted constraint (see section 3 for details). Bane and Riggle (2009) show that sets of constraints drawn from the phonological literature yield much richer typologies in HG than in OT as a result of gang effects, and that many of the additional patterns derived under HG are unattested. The same point is made by the investigation of Kaun's (2004) analysis of the typology of rounding harmony discussed in section 4. However, these constraint sets were developed in the context of OT, so these results leave open the possibility that a revised HG constraint set could provide a closer match to natural language typology. In this section we see that there is reason to doubt that the problem of typological over-generation faced by HG phonology can be solved in this way. The evidence comes from classes of problematic gang effects that arise from basic and uncontroversial constraints.",
"cite_spans": [
{
"start": 275,
"end": 297,
"text": "Bane and Riggle (2009)",
"ref_id": "BIBREF1"
},
{
"start": 563,
"end": 576,
"text": "Kaun's (2004)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The OT versus HG debate",
"sec_num": "2"
},
{
"text": "For example, AGREE(place) penalizes heterorganic clusters, and *g penalizes voiced velar stops. The weighting of these constraints in figure 1a derives a pattern in which only [g] undergoes place assimilation: IDENT(place) outweighs each markedness constraint individually, but heterorganic [g] violates both markedness constraints, which together outweigh IDENT(place). This pattern cannot be derived by any ranking of these constraints in OT: to block general place assimilation, IDENT(place) must outrank AGREE(place), but that ranking prevents assimilation of [g] as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The OT versus HG debate",
"sec_num": "2"
},
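The HG gang effect and the OT impossibility argument in this paragraph can be verified mechanically. A minimal Python sketch, in which the violation profiles and the singleton-[g] input /aga/ are hypothetical reconstructions in the spirit of figure 1a, not the paper's exact tableaux:

```python
from itertools import permutations

CON = ["ID(pl)", "AGR(pl)", "*g"]
W = {"ID(pl)": 3, "AGR(pl)": 2, "*g": 2}  # the weighting of figure 1a

# Hypothetical tableaux: violation counts per candidate.
tableaux = {
    "/agda/": {"agda": {"ID(pl)": 0, "AGR(pl)": 1, "*g": 1},   # faithful
               "adda": {"ID(pl)": 1, "AGR(pl)": 0, "*g": 0}},  # assimilated
    "/akta/": {"akta": {"ID(pl)": 0, "AGR(pl)": 1, "*g": 0},
               "atta": {"ID(pl)": 1, "AGR(pl)": 0, "*g": 0}},
    "/aga/":  {"aga":  {"ID(pl)": 0, "AGR(pl)": 0, "*g": 1},
               "ada":  {"ID(pl)": 1, "AGR(pl)": 0, "*g": 0}},
}
# Target pattern: [g] assimilates only in clusters; other clusters and
# singleton [g] surface faithfully.
target = {"/agda/": "adda", "/akta/": "akta", "/aga/": "aga"}

def hg_winner(cands, w):
    # HG: the candidate with the smallest weighted sum of violations wins.
    return min(cands, key=lambda c: sum(w[k] * v for k, v in cands[c].items()))

def ot_winner(cands, ranking):
    # OT: candidates are compared lexicographically down the ranking.
    return min(cands, key=lambda c: [cands[c][k] for k in ranking])

hg_ok = all(hg_winner(cands, W) == target[u] for u, cands in tableaux.items())
ot_ok = any(all(ot_winner(cands, r) == target[u] for u, cands in tableaux.items())
            for r in permutations(CON))
```

The weighting derives the full pattern (`hg_ok` is `True`), while exhaustive search over all six rankings finds none that does (`ot_ok` is `False`), matching the argument in the text.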
{
"text": "Place assimilation targeting only [g] is unattested (velars resist place assimilation more than coronals and labials and voicing does not affect place assimilation (Jun, 2004) ), but once HG is adopted, it is hard to avoid predicting the existence of this process because its derivation does not depend on the specific formulations of AGREE(place) and *g. The prediction follows as long as there is some constraint that penalizes heterorganic consonant clusters over homorganic clusters, which is necessary to account for place assimilation, and some constraint that penalizes [g] more than [b, d] and voiceless stops, which is necessary to account for a variety of phenomena, including languages such as Thai that allow voiced stops but not [g] (Ohala, 1983) .",
"cite_spans": [
{
"start": 591,
"end": 597,
"text": "[b, d]",
"ref_id": null
},
{
"start": 746,
"end": 759,
"text": "(Ohala, 1983)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The OT versus HG debate",
"sec_num": "2"
},
{
"text": "Variants of this configuration are easy to generate, e.g. *p (Hayes, 1999) can replace *g to derive place assimilation that only targets [p], or AGREE(place) can be replaced by AGREE(voice) to derive a pattern in which mixed-voicing clusters are tolerated unless they contain [g], in which case devoicing applies. Neither pattern has been reported in spite of thorough investigations of the typologies of place and voicing assimilation. More generally, HG predicts that any markedness constraints that mention the same feature specification in compatible contexts should be able to gang up on faithfulness constraints regulating that feature.",
"cite_spans": [
{
"start": 61,
"end": 74,
"text": "(Hayes, 1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The OT versus HG debate",
"sec_num": "2"
},
{
"text": "Furthermore, in HG any set of markedness constraints that can penalize a single segment should be able to gang up to motivate deletion of that segment. For example, two such independently motivated constraints (cf. Flemming 2003) can together derive the unattested pattern in figure 1b: pre-consonantal coronals are deleted only if the preceding vowel is back.",
"cite_spans": [
{
"start": 121,
"end": 135,
"text": "Flemming 2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The OT versus HG debate",
"sec_num": "2"
},
{
"text": "[Figure 1: HG tableaux evaluating candidates akta, atta, and agda against ID(pl) (w = 3), AGR(pl) (w = 2), and *g (w = 2).]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The OT versus HG debate",
"sec_num": "2"
},
{
"text": "Many potential gang effects involving deletion are likely to be ruled out by independent principles. E.g. an alternative repair may be universally preferred due to a fixed ranking among faithfulness constraints (Steriade, 2008) . This cannot be the case in the current example because it is a variant of a well-attested process of cluster simplification. On this basis, we can make the generalization that HG predicts the existence of variants of attested deletion processes in which deletion applies only in the presence of additional constraint violations. This set includes many unattested processes.",
"cite_spans": [
{
"start": 211,
"end": 227,
"text": "(Steriade, 2008)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The OT versus HG debate",
"sec_num": "2"
},
{
"text": "Another general class of problematic predictions of HG concerns iterative processes in which a markedness constraint can motivate multiple violations of faithfulness. For example, if voicing assimilation is motivated by a constraint like AGREE(voice), then mappings like /agta/ \u2192 [akta] and /agzta/ \u2192 [aksta] eliminate just one violation of AGREE(voice) at the cost of n \u2212 1 violations of IDENT(voice) with a cluster of n obstruents. In HG, the relative weighting of these two constraints establishes a maximum number of consonants that will undergo assimilation (a maximum of 1 in figure 1c), an unattested phenomenon. In OT, the equivalent ranking derives unbounded assimilation, because one violation of AGREE(voice) is worse than any number of violations of IDENT(voice).",
"cite_spans": [
{
"start": 279,
"end": 285,
"text": "[akta]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The OT versus HG debate",
"sec_num": "2"
},
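The bounded-assimilation prediction reduces to a weighted trade-off. A small sketch under the assumption, matching the configuration described for figure 1c, that full assimilation of an n-obstruent cluster removes one AGREE(voice) violation at the cost of n \u2212 1 IDENT(voice) violations; the specific weights are illustrative:

```python
def hg_full_assimilation(n, w_agree, w_ident):
    """In HG, assimilating an n-obstruent cluster with one voicing mismatch
    trades 1 AGREE(voice) violation for n - 1 IDENT(voice) violations;
    assimilation wins only while the weighted trade is favourable."""
    return w_agree * 1 > w_ident * (n - 1)

def ot_full_assimilation(n):
    # OT with AGREE(voice) ranked over IDENT(voice): one AGREE violation is
    # worse than any number of IDENT violations, so assimilation is unbounded.
    return True

# With w(AGREE) = 3 and w(IDENT) = 2, at most one consonant can change:
assert hg_full_assimilation(2, 3, 2)      # 2-obstruent cluster assimilates
assert not hg_full_assimilation(3, 3, 2)  # 3-obstruent cluster stays faithful
assert all(ot_full_assimilation(n) for n in range(2, 10))
```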
{
"text": "Examples of gang effects have been posited by analysts (see Pater 2016 for a review), but alternative OT analyses have been proposed in a number of cases, as in the much discussed case of Japanese loanword devoicing (Pater, 2009; Kawahara, 2006) . On balance, the evidence for HG gang effects is weak compared to the evidence that they result in substantial typological over-generation, supporting OT's hypothesis of strict constraint domination.",
"cite_spans": [
{
"start": 216,
"end": 229,
"text": "(Pater, 2009;",
"ref_id": "BIBREF20"
},
{
"start": 230,
"end": 245,
"text": "Kawahara, 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The OT versus HG debate",
"sec_num": "2"
},
{
"text": "We turn now to results from statistical learning theory and bring them to bear on OT's hypothesis of strict domination. The presentation is kept informal with technical details relegated to the final appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The perspective of statistical learning",
"sec_num": "3"
},
{
"text": "The statistical learning framework of binary classification assumes a set of instances X and a set of labels Y = {+1, \u22121}. A classifier can then be construed as a function which assigns a label y = +1 or y = \u22121 to an instance x in the set X . We are interested in classifiers with a special shape, as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary classification",
"sec_num": "3.1"
},
{
"text": "We start with a collection H of functions h : X \u2192 [\u22121, +1] that take an instance and return a number between \u22121 and +1. Using the functions in H, we construct the collection F of all weighted sums f = \u2211_{k=1}^K w_k h_k of an arbitrary finite number K of functions h_k in H through some corresponding weights w_k. We restrict ourselves to weights which are non-negative and sum up to 1 (whereby F is the convex hull of H). A function h \u2208 H or a function f \u2208 F maps instances in X to numbers between \u22121 and +1. The sign of these numbers can in turn be interpreted as a classification label. Thus, sign(h) with h \u2208 H is called a basic classifier and sign(f) with f = \u2211_{k=1}^K w_k h_k \u2208 F is called a voting (or an ensemble) classifier, because it aggregates and averages the \"votes\" of the basic classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary classification",
"sec_num": "3.1"
},
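The construction of a voting classifier sign(f) from base functions h : X \u2192 [\u22121, +1] can be sketched as follows; the tanh base classifiers on X = R are hypothetical toy examples:

```python
import math

def make_voting_classifier(hs, ws):
    """Build f = sum_k w_k h_k (weights projected onto the simplex) and the
    voting classifier sign(f)."""
    assert all(w >= 0 for w in ws)
    total = sum(ws)
    ws = [w / total for w in ws]  # normalize so the weights sum to 1
    def f(x):
        return sum(w * h(x) for w, h in zip(ws, hs))
    def classify(x):
        return 1 if f(x) > 0 else -1
    return f, classify

# Hypothetical base functions h : R -> [-1, +1]:
hs = [lambda x: math.tanh(x), lambda x: math.tanh(x - 1)]
f, classify = make_voting_classifier(hs, [3, 1])
```

By construction f stays in [\u22121, +1], so sign(f) is a voting classifier in the convex hull of the base classifiers.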
{
"text": "We consider a probability distribution P on X \u00d7 Y that generates labels from instances according to the conditional probability P(y|x). The generalization error Err_P(f) of a classifier f \u2208 F relative to P is the probability of misclassification of f, namely the probability under P of a labeled instance (x, y) such that f assigns to the instance x a label sign(f(x)) different from the intended label y:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary classification",
"sec_num": "3.1"
},
{
"text": "Err_P(f) = P( sign(f(x)) \u2260 y )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary classification",
"sec_num": "3.1"
},
{
"text": "As the generalization error measures the probability of misclassification, a classifier with a smaller generalization error is better than a classifier with a larger generalization error. The learner's ideal goal would be to find a classifier f \u2208 F with the smallest possible generalization error, that is a classifier which maps instances to their most probable label. Unfortunately, the generalization error Err P (\u2022) cannot be minimized directly, because it is defined in terms of the probability P which is unknown to the learner. Indeed, the learner only has at its disposal a training set T = ((x 1 , y 1 ), . . . , (x n , y n )) consisting of n labeled instances (x i , y i ) \u2208 X \u00d7 Y sampled independently according to P.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary classification",
"sec_num": "3.1"
},
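Although Err_P(f) cannot be minimized directly, it can be estimated by sampling from P, which is what the simulations in section 4 do. A sketch with a hypothetical toy distribution (instances uniform on [\u22121, 1], labels flipped with probability 0.1):

```python
import random

def generalization_error(classify, sample, n=10000, seed=0):
    """Monte Carlo estimate of Err_P(f) = P(sign(f(x)) != y): the fraction
    of sampled labeled pairs (x, y) that the classifier gets wrong."""
    rng = random.Random(seed)
    wrong = sum(classify(x) != y for x, y in (sample(rng) for _ in range(n)))
    return wrong / n

# Hypothetical toy P: x uniform on [-1, 1], true label = sign(x),
# flipped with probability 0.1 (so the best achievable error is 0.1).
def sample(rng):
    x = rng.uniform(-1, 1)
    y = 1 if x > 0 else -1
    return (x, -y) if rng.random() < 0.1 else (x, y)

classify = lambda x: 1 if x > 0 else -1  # the best classifier for this P
err = generalization_error(classify, sample)  # err is close to the 10% noise
```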
{
"text": "The goal of statistical learning theory is to provide error bounds, that is bounds on the generalization error Err_P(f) of an arbitrary classifier f \u2208 F based on parameters such as the shape of f or its performance on the training set T. Of course, we want our error bounds to be as low as possible, thus providing guarantees for the smallest possible generalization error. In this section, we focus on a state-of-the-art error bound due to Koltchinskii and Panchenko (2005, theorem 2, page 1464; henceforth KP), recalled in appendix A.1. Sections 3.2 and 3.3 discuss the two crucial properties of KP's bound qualitatively. This will suffice to make a connection with OT's strict domination in section 3.4.",
"cite_spans": [
{
"start": 443,
"end": 486,
"text": "Koltchinskii and Panchenko (2005, theorem 2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Binary classification",
"sec_num": "3.1"
},
{
"text": "The condition sign(f(x_i)) = y_i that a voting classifier sign(f) correctly classifies the data pair (x_i, y_i) is equivalent to the inequality y_i f(x_i) > 0. Thus, the size of the real number y_i f(x_i) can be intuitively interpreted as the margin of confidence with which f succeeds at assigning the correct label y_i to the instance x_i: the larger y_i f(x_i) is above zero, the larger the confidence. Given a training set T = ((x_1, y_1), . . . , (x_n, y_n)) that f classifies correctly, we focus on the most dangerous training pair, namely the one that f classifies with the smallest confidence. That smallest margin of confidence is called the margin \u03b4_T(f) of f on the training set T:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound depends on the margin",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b4_T(f) = min_{i=1,...,n} y_i f(x_i)",
"eq_num": "(1)"
}
],
"section": "KP's bound depends on the margin",
"sec_num": "3.2"
},
{
"text": "Since the margin \u03b4_T(f) represents the worst-case confidence of f on the training set T, it is intuitive that KP's bound (like earlier bounds, since Schapire et al. 1998) depends on the margin in such a way that the error bound is large (that is, worse) when the margin \u03b4_T(f) is small (namely close to 0). See appendix A.2 for details on the dependence of KP's bound on the margin. In conclusion, KP's bound says that, all else being equal, the learner should pick a classifier in F which correctly classifies the training set T with the largest margin \u03b4_T(f). We will use this fact extensively in section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound depends on the margin",
"sec_num": "3.2"
},
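The margin of equation (1) and the resulting max-margin selection rule can be sketched directly; the pool of clipped linear classifiers on X = R is a hypothetical toy example:

```python
def margin(f, T):
    """delta_T(f) = min_i y_i * f(x_i) over the training set T (equation (1))."""
    return min(y * f(x) for x, y in T)

# Hypothetical training set and a pool of classifiers f_s(x) = clip(s*x, -1, 1);
# we pick the classifier with the largest margin on T.
T = [(-2.0, -1), (-0.5, -1), (1.0, 1), (3.0, 1)]
pool = {s: (lambda x, s=s: max(-1.0, min(1.0, s * x))) for s in (0.5, 1.0, 2.0)}
best = max(pool, key=lambda s: margin(pool[s], T))  # the steepest slope wins
```

Here the slope s = 2.0 pushes every training point to confidence 1, so it is the max-margin choice.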
{
"text": "Consider a representation of a voting classifier f \u2208 F as a sum of basic classifiers in H, namely f = \u2211_{k=1}^K w_k h_k with non-negative weights w_k which sum up to 1 and are therefore each smaller than 1. We assume without loss of generality that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound depends on the effective dimension",
"sec_num": "3.3"
},
{
"text": "w_1 \u2265 w_2 \u2265 ... \u2265 w_K.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound depends on the effective dimension",
"sec_num": "3.3"
},
{
"text": "Intuitively, the number K of basic classifiers in the representation of f can be interpreted as the dimension of f. Yet, the weights in the tail of the representation of f might be tiny, whereby the corresponding basic classifiers contribute only little and should be discounted when determining the dimension of f. KP thus consider the alternative notion (2) of effective dimension d_T(f) of the classifier f. Intuitively, we split K as K = d + (K \u2212 d) and replace K \u2212 d with the sum \u2211_{j=d+1}^K w_j of the K \u2212 d weights in the tail, thus taking into account the smallness of the smallest weights. If the weights decrease fast, the tail weights will be small and the effective dimension d_T(f) will therefore be small.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound depends on the effective dimension",
"sec_num": "3.3"
},
{
"text": "d_T(f) = min_{0 \u2264 d \u2264 K} [ d + ( \u2211_{j=d+1}^K w_j )^2 \u00b7 2 log n / \u03b4_T(f)^2 ] (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound depends on the effective dimension",
"sec_num": "3.3"
},
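Definition (2) translates into a few lines of code. A sketch, also illustrating the claim that faster weight decay lowers the effective dimension at a fixed margin and sample size; the two weight vectors are hypothetical:

```python
import math

def effective_dimension(ws, n, delta):
    """Effective dimension per equation (2): minimize over d the quantity
    d + (sum of the K - d tail weights)^2 * 2*log(n) / delta^2."""
    ws = sorted(ws, reverse=True)  # w_1 >= w_2 >= ... >= w_K
    K = len(ws)
    candidates = (d + sum(ws[d:]) ** 2 * 2 * math.log(n) / delta ** 2
                  for d in range(K + 1))
    return min(candidates)

# Exponentially decaying weights vs. uniform weights, both normalized to 1:
exp_ws = [2.0 ** -k for k in range(1, 11)]
s = sum(exp_ws)
exp_ws = [v / s for v in exp_ws]
uni_ws = [0.1] * 10
```

At the same margin and sample size, the exponentially decaying weights yield a markedly smaller effective dimension than the uniform ones.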
{
"text": "The novelty of KP's error bound is that it depends not only on the margin \u03b4_T(f) of the classifier f but also on its effective dimension d_T(f) and thus on the decay of the weights in a representation of f, in the sense that (for a fixed margin) KP's bound is small (that is, better) when the effective dimension is small because of a fast decay of the weights. For instance, the bound is smaller for exponentially decaying weights than for polynomially decaying weights (assuming that the margin is the same in the two cases). See appendix A.3 for details on the dependence of KP's bound on the decay of the weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound depends on the effective dimension",
"sec_num": "3.3"
},
{
"text": "In conclusion, KP's error bound says that, all else being equal, the learner should pick a classifier in F which correctly classifies the training set T and whose weights decay fastest, possibly exponentially. We now make explicit the implications of this conclusion for the OT versus HG debate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound depends on the effective dimension",
"sec_num": "3.3"
},
{
"text": "The connection between the classification framework reviewed above and the framework of constraint-based phonology can be drawn as follows. Let the space of instances consist of triplets (u, s, s\u2032) where u is an underlying form and s, s\u2032 are corresponding candidate surface forms. We interpret s as the intended winner and s\u2032 as the intended loser. The HG grammar relative to constraints C_1, . . . , C_K and weights w_1, . . . , w_K \u2265 0 is consistent with the triplet (u, s, s\u2032) provided",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound and OT's strict domination",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2211_{k=1}^K w_k h_k(u, s, s\u2032) > 0 where h_k(u, s, s\u2032) is the constraint violation difference h_k(u, s, s\u2032) = C_k(u, s\u2032) \u2212 C_k(u, s)",
"eq_num": "(3)"
}
],
"section": "KP's bound and OT's strict domination",
"sec_num": "3.4"
},
{
"text": "Without loss of generality, we assume the weights w k sum up to 1. Furthermore, we assume that there are a finite number of underlying forms and a finite number of surface forms (for discussion of this assumption, see Alber et al. 2015) . Thus, we can assume without loss of generality that",
"cite_spans": [
{
"start": 218,
"end": 236,
"text": "Alber et al. 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound and OT's strict domination",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u22121 \u2264 h_k(u, s, s\u2032) \u2264 +1",
"eq_num": "(4)"
}
],
"section": "KP's bound and OT's strict domination",
"sec_num": "3.4"
},
{
"text": "for every triplet (u, s, s\u2032). In fact, if the inequalities (4) fail for the original constraints, we can divide them by the largest number of constraint violations without affecting the typological predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound and OT's strict domination",
"sec_num": "3.4"
},
{
"text": "In conclusion, an HG grammar can be construed as a classifier f \u2208 F = conv(H) in the convex hull of the collection H of the constraint violation differences h_k in (3), which take values in [\u22121, +1] by (4). The OT grammar relative to constraints C_1, . . . , C_K and a constraint ranking \u03c0 is consistent with the triplet (u, s, s\u2032) provided there exists a constraint C_k such that each of the constraints \u03c0-ranked above C_k assigns the same number of violations to the two mappings (u, s) and (u, s\u2032) while the constraint C_k assigns fewer violations to the winner mapping (u, s) than to the loser mapping (u, s\u2032). The following well-known result says that the latter condition is equivalent to the HG consistency condition relative to exponentially decaying weights (Prince and Smolensky, 2004; Keller, 2000; Keller, 2005). The constant Z in (5b) is arbitrary and can be used to normalize the weights.",
"cite_spans": [
{
"start": 766,
"end": 794,
"text": "(Prince and Smolensky, 2004;",
"ref_id": "BIBREF24"
},
{
"start": 795,
"end": 808,
"text": "Keller, 2000;",
"ref_id": "BIBREF8"
},
{
"start": 809,
"end": 822,
"text": "Keller, 2005)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound and OT's strict domination",
"sec_num": "3.4"
},
{
"text": "Theorem 1 Consider an arbitrary ranking \u03c0. Without loss of generality, assume that \u03c0 is (5a), whereby C_1 is ranked at the top, C_2 is ranked below it, and so on, until the bottom-ranked C_K.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound and OT's strict domination",
"sec_num": "3.4"
},
{
"text": "a. C_1 | C_2 | . . . | C_K b. w_1 = (1/Z) ((\u2206+\u03b4)/\u03b4)^{-1} w_2 = (1/Z) ((\u2206+\u03b4)/\u03b4)^{-2}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound and OT's strict domination",
"sec_num": "3.4"
},
{
"text": ". . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound and OT's strict domination",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w_K = (1/Z) ((\u2206+\u03b4)/\u03b4)^{-K}",
"eq_num": "(5)"
}
],
"section": "KP's bound and OT's strict domination",
"sec_num": "3.4"
},
{
"text": "The HG grammar corresponding to the weights in (5b) for an arbitrary Z > 0 and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound and OT's strict domination",
"sec_num": "3.4"
},
{
"text": "\u2206 = max { |h_k(u, s, s\u2032)| : k = 1, . . . , K } \u03b4 = min { h_k(u, s, s\u2032) : h_k(u, s, s\u2032) > 0 }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound and OT's strict domination",
"sec_num": "3.4"
},
{
"text": "is consistent with a triplet (u, s, s\u2032) if and only if the OT grammar corresponding to \u03c0 is.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound and OT's strict domination",
"sec_num": "3.4"
},
{
"text": "Theorem 1 says that OT's strict domination amounts to restricting HG to the subset of the typology whose weights decay exponentially, as in (5b). KP's bound provides a learnability rationale for this restriction: fast decaying weights ensure a smaller effective dimension (as long as the margin does not shrink) and thus a smaller (that is, better) error bound. Thus, a learner of an OT grammar would have a better guarantee of a low generalization error, and we may conjecture that it will actually have a lower generalization error in practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KP's bound and OT's strict domination",
"sec_num": "3.4"
},
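Theorem 1 can be checked numerically: with the exponential weights of (5b), HG consistency agrees with OT consistency on every violation-difference vector. A sketch with integer violation differences (so \u03b4 = 1) and a hypothetical ranking over five constraints:

```python
import random

def ot_consistent(h, ranking):
    # OT: the highest-ranked constraint with a nonzero violation difference
    # must favor the winner (positive difference).
    for k in ranking:
        if h[k] != 0:
            return h[k] > 0
    return False

def hg_consistent(h, w):
    # HG: the weighted sum of violation differences must be positive.
    return sum(w[k] * h[k] for k in range(len(h))) > 0

def exponential_weights(ranking, Delta, delta=1):
    # The weights of (5b): w_k proportional to ((Delta+delta)/delta)^(-rank).
    base = (Delta + delta) / delta
    w = [0.0] * len(ranking)
    for pos, k in enumerate(ranking, start=1):
        w[k] = base ** (-pos)
    Z = sum(w)
    return [x / Z for x in w]

random.seed(0)
K, Delta = 5, 3
ranking = [2, 0, 4, 1, 3]  # a hypothetical ranking pi over 5 constraints
w = exponential_weights(ranking, Delta)
for _ in range(1000):
    h = [random.randint(-Delta, Delta) for _ in range(K)]
    assert ot_consistent(h, ranking) == hg_consistent(h, w)
```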
{
"text": "To complement the theoretical perspective of section 3, we now turn to simulations of margin-based learning on two test cases. Our experiments found OT target grammars to be easier, on average, to learn than HG-non-OT ones. Furthermore, we found that this learning procedure yields weights with a lower effective dimension on OT targets than on HG-non-OT ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical simulations",
"sec_num": "4"
},
{
"text": "Our first test case is based on the analysis of rounding harmony by Kaun (2004). It models progressive harmony between two vowels of the same backness. As it posits two levels of height and backness, it assumes 8 underlying forms consisting of one of 4 possible triggers (i.e., the four rounded vowels, which differ in height and backness) and of one of 2 possible targets (the unrounded vowels of corresponding backness, of either height). Each underlying form has 2 candidate surface forms, one with harmony and one without. The constraint set consists of 7 constraints (see the online supplementary materials). The typology (computed with OT-Help2; Staubs et al. 2010) consists of 37 OT grammars and 26 HG-non-OT grammars.",
"cite_spans": [
{
"start": 68,
"end": 79,
"text": "Kaun (2004)",
"ref_id": "BIBREF6"
},
{
"start": 660,
"end": 679,
"text": "Staubs et al. 2010)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test cases",
"sec_num": "4.1"
},
{
"text": "Our second test case is based on the analysis of syllable structure by Prince and Smolensky (2004, Part II). This analysis involves 5 constraints in its simpler variant. As in Bane and Riggle (2009), the set of underlying forms consists of all 13 strings of length 1 to 3 over the symbols {C, V} (except CV, which has only one possible output). Furthermore, we used their procedure to precompute all possibly optimal outputs, yielding a total of 56 surface forms. 1 The typology (computed with OT-Help2) consists of 12 OT and 13 HG-non-OT grammars.",
"cite_spans": [
{
"start": 71,
"end": 107,
"text": "Prince and Smolensky (2004, Part II)",
"ref_id": null
},
{
"start": 177,
"end": 199,
"text": "Bane and Riggle (2009)",
"ref_id": "BIBREF1"
},
{
"start": 463,
"end": 464,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test cases",
"sec_num": "4.1"
},
{
"text": "1 Note that what we call underlying and surface forms do not really correspond to actual forms but to patterns of constraint violations. For instance, in our second test case, the underlying forms /tat/ and /bat/ are a single \"underlying form\" /CVC/, and the surface forms (say for /tat/) [ta] and [da] are a single \"surface form\" [CV] . This means that the admittedly low number of data points we have should not be compared to the number of words human learners are exposed to; our data points exemplify all the possible patterns of small length for each phenomenon.",
"cite_spans": [
{
"start": 329,
"end": 333,
"text": "[CV]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test cases",
"sec_num": "4.1"
},
{
"text": "Algorithm 1 features the pseudo-code for our simulation procedure. For each grammar G in the typology, we build the set of instances X G in (6). We consider all triplets (u, s, s ) where: u is an underlying form; s is the corresponding winner surface form according to the grammar G; and s is a loser candidate for u different from s. We represent (u, s, s ) as the vector h(u, s, s ) whose components are the constraint violation differences h k (u, s, s ) in (3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "4.2"
},
{
"text": "X G = {x = h(u, s, s ) | G maps u to s} (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "4.2"
},
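As an illustration of how the instance set in (6) could be assembled, here is a minimal Python sketch. The form names, the violation counts, and the sign convention of the difference vectors (loser violations minus winner violations, so that w . x > 0 exactly when the winner beats the loser) are assumptions for illustration, since the paper's definition (3) is not reproduced in this chunk.

```python
# Hypothetical violation profiles: violations[u][s] is the tuple of
# constraint-violation counts of candidate s for underlying form u.
# All names and numbers here are made up for illustration.
violations = {
    "u1": {"s_win": (0, 1), "s_lose": (2, 0)},
    "u2": {"s_win": (1, 0), "s_lose": (1, 2)},
}
winners = {"u1": "s_win", "u2": "s_win"}  # the mapping defined by G

def h(u, s, s_prime):
    """Violation-difference vector h(u, s, s'): loser violations minus
    winner violations, so that w . h > 0 iff s beats s' under weights w."""
    return tuple(l - w for w, l in zip(violations[u][s], violations[u][s_prime]))

# X_G collects one difference vector per winner/loser pair, as in (6).
X_G = [h(u, winners[u], s2)
       for u in violations
       for s2 in violations[u] if s2 != winners[u]]
# X_G == [(2, -1), (0, 2)]
```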
{
"text": "We sample a training set T by drawing uniformly with replacement n data points from X G (we assume all labels are equal to y = 1, because we only generate positive data). Based on the considerations in section 3.2, we compute the weights w * which maximise the empirical margin on the training set T over all non-negative weight vectors w \u2265 0. The margin (1) can be made explicit as in (7) in the specific case considered",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b4 T (w) = min{w T x | x \u2208 T }",
"eq_num": "(7)"
}
],
"section": "Procedure",
"sec_num": "4.2"
},
{
"text": "We do this for n ranging from 3 to an arbitrary number N . This procedure is repeated 250 times, so we can compute the average generalization error Err(n, G) that a margin-based learner trying to learn G makes after seeing n data points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "4.2"
},
{
"text": "Algorithm 1: Learning simulation procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "4.2"
},
{
"text": "1 for G in the typology do Figure 2 plots the error Err(n, G) averaged over target OT-grammars G (solid red lines) and averaged over target HG-non-OT grammars (dashed blue lines). We observe a learnability advantage for OT grammars in practice, as the generalization error of a margin-based learner on OT target grammars is lower for any given number n of data points than that of the same learner on HG-non-OT targets.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 35,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Procedure",
"sec_num": "4.2"
},
{
"text": "2 for n = 3, . . . , N do 3 for m = 1, . . . , 250 do 4 Randomly select T \u2208 X n G 5 w * \u2190 arg max w\u22650 \u03b4 T (w) 6 Err(m) \u2190 P(w * T x \u2264 0 | x \u2208 X G ) 7 Err(n, G) \u2190 1 250 m Err(m)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "4.2"
},
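The loop of algorithm 1 can be sketched as runnable Python. The instance set X_G below is made-up toy data for a 2-constraint grammar, and a grid search over the weight simplex stands in for the margin-maximising solver (the paper does not specify the solver), so this is an illustration of the procedure rather than a reproduction of it.

```python
import random

# Toy instance set X_G: violation-difference vectors x = h(u, s, s') for a
# hypothetical 2-constraint target grammar (assumed data, for illustration).
X_G = [(2, -1), (1, 0), (3, -2), (0, 1)]

def min_margin(w, T):
    """Empirical margin delta_T(w) = min over x in T of w . x, as in (7)."""
    return min(sum(wi * xi for wi, xi in zip(w, x)) for x in T)

def best_weights(T, steps=200):
    """Crude stand-in for the margin-maximising solver: grid-search the
    2-constraint weight simplex {w >= 0, w1 + w2 = 1} for arg max delta_T."""
    return max(((i / steps, 1 - i / steps) for i in range(steps + 1)),
               key=lambda w: min_margin(w, T))

def generalization_error(w, X):
    """Fraction of instances misclassified, as in line 6 of algorithm 1."""
    return sum(1 for x in X if sum(wi * xi for wi, xi in zip(w, x)) <= 0) / len(X)

random.seed(0)
n = 3          # training-set size; the paper lets n range from 3 to N
errs = []
for _ in range(250):                  # 250 resamplings, as in the paper
    T = random.choices(X_G, k=n)      # uniform sampling with replacement
    w_star = best_weights(T)          # line 5 of algorithm 1
    errs.append(generalization_error(w_star, X_G))
avg_err = sum(errs) / len(errs)       # Err(n, G), line 7 of algorithm 1
```

The toy X_G is linearly separable by some non-negative weights, so training on the full instance set drives the error to zero; training on small samples leaves a residual generalization error, which is the quantity the simulations track.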
{
"text": "The error obtained in the simulations cannot be straightforwardly compared to Koltchinskii and Panchenko's error bound (8) , as we do not know the value of the constant K which appears in the bound. Yet, figure 3 shows a lower effective dimensionas defined in (2) -of the weights w * selected by the learner when trained on OT target grammars (red solid line) than on HG-non-OT targets (blue dotted line). Thus, we can speculate that the easier learnability of OT grammars compared to HG-non-OT grammars is related to the lower effective dimension of the HG weights that generate them.",
"cite_spans": [
{
"start": 78,
"end": 122,
"text": "Koltchinskii and Panchenko's error bound (8)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Of course, the advantage of OT that we observe on average could be due to just a couple of very \"easy\" OT grammars that drag the average down. For instance, in the case of harmony, both the grammar with systematic harmony and the one with no harmony at all only depend on only one constraint (respectively ALIGN-L/R([RD]) and DEP(LINK)) and both belong to the OT typology. Figure 4 thus plots the generalization error Err(n, G) for each individual OT (red dashed lines) and each individual HGnon-OT (blue dotted lines) target grammar G. The overall pattern is that most OT grammars are eas-ier to learn than most HG-non-OT grammars. In the case of syllable structure, there are indeed only a few exceptions to this general pattern. The pattern is admittedly somewhat less clear in the case of vowel harmony, as discussed below in section 5.A.",
"cite_spans": [],
"ref_spans": [
{
"start": 373,
"end": 381,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "This paper has argued that OT's strict domination seems to be warranted by phonological typology (section 2) and that strict domination might provide a learnability advantage (pace . This learnability argument is twofold: first, a review of recent results in the statistical learning literature (section 3) lets us conclude that learners of OT grammars will infer them with greater chance of success for the same amount of data. Second, simulation results on realistic test cases (section 4) show that OT target grammars are indeed easier to learn under certain assumptions. We conclude with various open issues that we would like to address in future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and open issues",
"sec_num": "5"
},
{
"text": "(A) As remarked above, figure 4 shows that several of the \"hardest\" grammars are part of the OT typology in the harmony case. As a tentative explanation, we note that in this test case, there are few underlying and surface forms, and many constraints, some of which are closely related. For instance, there are three different variants of ALIGN-L/R([RD]) for different features of the trigger vowel. Thus, in most grammars of the HG typology, not all constraints have to be active (in the sense of having non-zero weights). Certain OT grammars are harder than certain HG-non-OT grammars by virtue of requiring more active constraints. Future work will try to get a cleaner picture by comparing only OT and HG-non-OT grammars that require a comparable number of active constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and open issues",
"sec_num": "5"
},
{
"text": "(B) For consistency with the classification framework of section 3.1, the simulations described in section 4 define the error in terms of the number of triplets (u, s, s ) where the loser s incorrectly beats the winner s (see line 6 in algorithm 1). We might instead want to redefine the error in terms of the number of underlying forms u mapped to a winner different from s. For the results from statistical learning theory in section 3 to still be relevant, we would need to extend them from classifiers of the form f (x) = OT target grammar G (red line) and each HG-non-OT target grammar G (blue line).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and open issues",
"sec_num": "5"
},
{
"text": "k w k h k (x) to f (x) = min t\u2208S(x) k w k h k (x, t),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and open issues",
"sec_num": "5"
},
{
"text": "where S is a function from x to some finite set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and open issues",
"sec_num": "5"
},
{
"text": "(C) The simulations reported in section 4 assume a uniform distribution over triplets (u, s, s ) all consistent with some target grammar G. Future research will look at different data distributions (e.g., a Zipfian distribution over the underlying forms u) and the addition of some noise in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and open issues",
"sec_num": "5"
},
{
"text": "(D) The learner tested in section 4 simply looks for weights which maximize the margin but is oblivious to whether the target grammar is an OT or an HG-non-OT grammar. For OT targets, theorem 1 suggests the more specific learning strategy in algorithm 2. We consider each ranking \u03c0, construct the corresponding exponentially decaying weights w \u03c0 in (5), and determine the ranking \u03c0 * whose weights w \u03c0 * maximize the margin. We denote by Err OT (n, G) the average error of the OT grammar corresponding to \u03c0 * on the target grammar G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and open issues",
"sec_num": "5"
},
{
"text": "Err OT (n, G) is generally quite high when G is a HG-non-OT grammar. This is not surprising, since we're trying to learn a grammar outside the search space. Yet, figure 5 shows that even when the target grammar G is OT, Err OT (n, G) (red solid line) is slightly higher than the error Err(n, G) (dashed blue line) obtained with the general learning procedure in algorithm 1. This is puzzling, as one might have expected that the restriction of the search space in algorithm 2 should have led to a lower error. Towards a possible explanation, we observe that the weights w \u03c0 * obtained by algorithm 2 result in a very low margin, and thus a very high effective dimension compared to the weights w * obtained through algorithm 1, as shown in figure 6. Evidently, margin-based learning is incompatible with the strategy (5) for computing exponentiallydecaying weights corresponding to OT rankings. One possibility for future research is to base weights not on full rankings, but on RCD's (Tesar and Smolensky, 1998) hierarchy H 1 H 2 \u2022 \u2022 \u2022 (H 1 consists of constraints never loser preferring in T ; H 2 consists of constraints which are only loserpreferring on triplets (u, s, s ) of T where some constraint in H 1 is winner-preferring; and so on). For instance, one could pick weights w H so that the constraints in H 1 all have the same weight, the constraints in H 2 all have the same exponentially smaller weight, and so on. A strategy of this kind might reach a compromise between fast decay and large margin. Randomly select T \u2208 X n G 5",
"cite_spans": [
{
"start": 985,
"end": 1012,
"text": "(Tesar and Smolensky, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and open issues",
"sec_num": "5"
},
{
"text": "\u03c0 * \u2190 arg max \u03c0 \u03b4 T (w \u03c0 ) 6 Err OT (m) \u2190 P(w T \u03c0 * x \u2264 0|x \u2208 X G ) 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and open issues",
"sec_num": "5"
},
{
"text": "Err OT (n, G) \u2190 1 250 m Err OT (m)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and open issues",
"sec_num": "5"
},
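The ranking-enumeration step of algorithm 2 can be sketched as follows. Since the exponentially decaying weights of (5) are not reproduced in this chunk, the scheme below (weight 2^-r for the r-th ranked constraint, normalised) is an assumption, as is the toy 3-constraint instance set.

```python
import itertools

# Toy instance set: violation-difference vectors x = h(u, s, s') for a
# hypothetical 3-constraint target grammar (assumed data, for illustration).
X_G = [(1, -1, 1), (0, 1, -1), (1, 0, 0)]
K = 3

def decaying_weights(pi):
    """Exponentially decaying weights for ranking pi, standing in for (5):
    the r-th ranked constraint gets weight 2**-r, normalised to sum to 1."""
    w = [0.0] * K
    for r, k in enumerate(pi):
        w[k] = 2.0 ** -r
    total = sum(w)
    return [wi / total for wi in w]

def min_margin(w, T):
    """Empirical margin delta_T(w) = min over x in T of w . x, as in (7)."""
    return min(sum(wi * xi for wi, xi in zip(w, x)) for x in T)

# Line 5 of algorithm 2: the ranking whose decaying weights maximise the
# margin, found by enumerating all K! rankings.
pi_star = max(itertools.permutations(range(K)),
              key=lambda pi: min_margin(decaying_weights(pi), X_G))
w_star = decaying_weights(pi_star)
```

On this toy set, only the ranking that puts constraint 0 on top separates all the instances, so the enumeration recovers it; its margin (1/7) is much smaller than what an unrestricted weight vector could achieve, which illustrates the low-margin behaviour of algorithm 2 discussed above.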
{
"text": "Proceedings of the Society for Computation in Linguistics (SCiL) 2018, pages 1-11. Salt Lake City, Utah, January 4-7, 2018",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research reported in this paper was partially supported by the MIT France Seed Fund (project title: 'Phonological Typology and Learnability') and by the Agence National de la Recherche (project title: 'The mathematics of segmental phonotactics'). We thank an anonymous reviewer for helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Appendix: more details on KP's bound A.1 The exact formulation of Panchenko's (2005, theorem 2, p. 1464) error bound discussed in section 3 is as follows:Theorem 2 Suppose that H is a VC-subgraph class with VC-dimension V (see for instance Mohri et al. 2012) . Consider a voting classifier f = K k=1 w k h k \u2208 F = conv(H) which classifies correctly a training set T = ((x 1 , y 1 ), . . . , (x n , y n )) sampled i.i.d. according to a distribution P. For every t > 0, the generalization error Err P (f ) of f is bound as follows with probability at least 1 \u2212 e \u2212t :where K is a universal constant, \u03b4 T (f ) is the margin of the classifier f on the training set T defined in (1) and d T (f ) is its effective dimension defined in (2).A.2 Since K k=1 w k \u2264 1, the choice d = 0 in the definition (2) of the effective dimension yieldswhich decreases as 1/n when n \u2192 \u221e and increases as 1/\u03b4 2 when \u03b4 \u2192 0.A.3 The effective dimension d T (f ) which appears in KP's bound (8) depends on the decay of the weights in a representation of f . The following corollary (see Koltchinskii and Panchenko 2005 , example on p. 1465) details the dependence of the bound on the decay. The proof of the corollary is provided in the online supplement, based on class notes by Panchenko (2004, class 21) , as it has not appeared in the literature.Corollary 1 Consider a classifier f = K i=1 w k h k in F which classifies correctly a training set T = ((x 1 , y 1 \u2022 If the weights w k decay polynomially, i.e. w k \u2264 k \u2212B for some B > 1, KP's bound (8) becomes:where C B \u2192 1 as B \u2192 \u221e.\u2022 If the weights w k decay exponentially, namely w k \u2264 e \u2212k , KP's bound (8) becomes:The two bounds (10) and (11) decrease as 1/n when n \u2192 \u221e, just as in the general case (9) . The substantial improvement concerns the growth of the bound when \u03b4 \u2192 0. The general bound (9) grows as 1/\u03b4 2 when \u03b4 \u2192 0. 
The bound (10) for the case of polynomial decay instead grows only as 1/\u03b4 2/(2B\u22121) , which is slower than 1/\u03b4 2 because 2/(2B \u2212 1) \u2264 2 as B > 1. Furthermore, the bound (11) for the case of exponential decay grows only as log 1/\u03b4 when \u03b4 \u2192 0, which is substantially slower than 1/\u03b4 2 . When B \u2192 \u221e, the bound (10) for the case of polynomial decay becomes the bound (11) for the case of exponential decay.",
"cite_spans": [
{
"start": 66,
"end": 104,
"text": "Panchenko's (2005, theorem 2, p. 1464)",
"ref_id": null
},
{
"start": 240,
"end": 258,
"text": "Mohri et al. 2012)",
"ref_id": null
},
{
"start": 1059,
"end": 1090,
"text": "Koltchinskii and Panchenko 2005",
"ref_id": "BIBREF11"
},
{
"start": 1252,
"end": 1278,
"text": "Panchenko (2004, class 21)",
"ref_id": null
},
{
"start": 1726,
"end": 1729,
"text": "(9)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 1425,
"end": 1436,
"text": "((x 1 , y 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "From intensional properties to universal support. Universit\u00e0 degli Studi di Verona and Rutgers University",
"authors": [
{
"first": "Birgit",
"middle": [],
"last": "Alber",
"suffix": ""
},
{
"first": "Natalie",
"middle": [],
"last": "Delbusso",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Prince",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Birgit Alber, Natalie DelBusso, and Alan Prince. 2015. From intensional properties to universal support. Uni- versit\u00e0 degli Studi di Verona and Rutgers University.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Evaluating Strict Domination: The typological consequences of weighted constraints",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Bane",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Riggle",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 45th annual meeting of the Chicago Linguistics Society",
"volume": "",
"issue": "",
"pages": "13--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Bane and Jason Riggle. 2009. Evaluating Strict Domination: The typological consequences of weighted constraints. In Proceedings of the 45th an- nual meeting of the Chicago Linguistics Society, pages 13-27.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The VC dimension of constraint-based grammars",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Bane",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Riggle",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Sonderegger",
"suffix": ""
}
],
"year": 2010,
"venue": "Lingua",
"volume": "120",
"issue": "",
"pages": "1194--1208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Max Bane, Jason Riggle, and Morgan Sonderegger. 2010. The VC dimension of constraint-based gram- mars. Lingua, 120.5:1194-1208.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The relationship between coronal place and vowel backness",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Flemming",
"suffix": ""
}
],
"year": 2003,
"venue": "Phonology",
"volume": "20",
"issue": "",
"pages": "335--373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Flemming. 2003. The relationship between coronal place and vowel backness. Phonology, 20:335-373.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Phonetically-driven phonology: the role of Optimality Theory and inductive grounding",
"authors": [
{
"first": "Bruce",
"middle": [],
"last": "Hayes",
"suffix": ""
}
],
"year": 1999,
"venue": "Functionalism and Formalism in Linguistics",
"volume": "1",
"issue": "",
"pages": "243--285",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruce Hayes. 1999. Phonetically-driven phonology: the role of Optimality Theory and inductive grounding. In Michael Darnell, Edith Moravscik, Michael Noonan, Frederick Newmeyer, and Kathleen Wheatly, editors, Functionalism and Formalism in Linguistics, volume 1: General Papers, pages 243-285. John Benjamins, Amsterdam.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Place assimilation",
"authors": [],
"year": 2004,
"venue": "Phonetically Based Phonology",
"volume": "",
"issue": "",
"pages": "58--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jongho Jun. 2004. Place assimilation. In B. Hayes, R. Kirchner, and D. Steriade, editors, Phonetically Based Phonology, pages 58-86. Cambridge University Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The typology of rounding harmony",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "Kaun",
"suffix": ""
}
],
"year": 2004,
"venue": "Phonetically based phonology",
"volume": "",
"issue": "",
"pages": "87--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail Kaun. 2004. The typology of rounding harmony. In Bruce Hayes, Robert Kirchner, and Donca Steriade, editors, Phonetically based phonology, pages 87-116. Cambridge University Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A faithfulness ranking projected from a perceptibility scale: The case of [+voice",
"authors": [
{
"first": "Shigeto",
"middle": [],
"last": "Kawahara",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "82",
"issue": "",
"pages": "536--574",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shigeto Kawahara. 2006. A faithfulness ranking pro- jected from a perceptibility scale: The case of [+voice] in Japanese. Language, 82:536-574.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Gradience in Grammar. Experimental and Computational Aspects of Degrees of Grammaticality",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Keller. 2000. Gradience in Grammar. Experimen- tal and Computational Aspects of Degrees of Gram- maticality. Ph.D. thesis, University of Edinburgh, England.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Linear Optimality Theory as a model of gradience in grammar",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2005,
"venue": "Gisbert Fanselow, Caroline F\u00e9ry, Ralph Vogel, and Matthias Schlesewsky",
"volume": "",
"issue": "",
"pages": "270--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Keller. 2005. Linear Optimality Theory as a model of gradience in grammar. In Gisbert Fanselow, Car- oline F\u00e9ry, Ralph Vogel, and Matthias Schlesewsky, editors, Gradience in Grammar: Generative Perspec- tives, pages 270-287. Oxford University Press, Ox- ford.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Empirical margin distributions and bounding the generalization error of combined classifiers",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Koltchinskii",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Panchenko",
"suffix": ""
}
],
"year": 2002,
"venue": "Ann. Statist",
"volume": "30",
"issue": "",
"pages": "1--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Koltchinskii and Dmitry Panchenko. 2002. Empirical margin distributions and bounding the gen- eralization error of combined classifiers. Ann. Statist., 30:1-50.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Complexities of convex combinations and bounding the generalization error in classification",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Koltchinskii",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Panchenko",
"suffix": ""
}
],
"year": 2005,
"venue": "Ann. Statist",
"volume": "33",
"issue": "",
"pages": "1455--1496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Koltchinskii and Dmitry Panchenko. 2005. Complexities of convex combinations and bounding the generalization error in classification. Ann. Statist., 33.4:1455-1496.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Generalization bounds for voting classifiers based on sparsity and clustering",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Koltchinskii",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Panchenko",
"suffix": ""
},
{
"first": "Savina",
"middle": [],
"last": "Andonova",
"suffix": ""
}
],
"year": 2003,
"venue": "Lecture Notes in Artificial Intelligence 2777",
"volume": "",
"issue": "",
"pages": "492--505",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Koltchinskii, Dmitry Panchenko, and Savina Andonova. 2003a. Generalization bounds for voting classifiers based on sparsity and clustering. In Lecture Notes in Artificial Intelligence 2777, pages 492-505.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bounding the generalization error of convex combinations of classifiers: Balancing the dimensionality and the margins",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Koltchinskii",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Panchenko",
"suffix": ""
},
{
"first": "Lozano",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2003,
"venue": "Ann. Appl. Probab",
"volume": "13",
"issue": "",
"pages": "213--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Koltchinskii, Dmitry Panchenko, and Lozano. 2003b. Bounding the generalization error of convex combinations of classifiers: Balancing the dimension- ality and the margins. Ann. Appl. Probab., 13:213- 252.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Harmonic Grammar: A formal multi-level connectionist theory of linguistic wellformedness: An application",
"authors": [
{
"first": "G\u00e8raldine",
"middle": [],
"last": "Legendre",
"suffix": ""
},
{
"first": "Yoshiro",
"middle": [],
"last": "Miyata",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Smolensky",
"suffix": ""
}
],
"year": 1990,
"venue": "Cognitive Science Society",
"volume": "12",
"issue": "",
"pages": "884--891",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00e8raldine Legendre, Yoshiro Miyata, and Paul Smolen- sky. 1990a. Harmonic Grammar: A formal multi-level connectionist theory of linguistic well- formedness: An application. In Morton Ann Gerns- bacher and Sharon J. Derry, editors, Annual conference of the Cognitive Science Society 12, pages 884-891, Mahwah, New Jersey. Lawrence Erlbaum Associates.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Harmonic Grammar: A formal multi-level connectionist theory of linguistic wellformedness: Theoretical foundations",
"authors": [
{
"first": "G\u00e9raldine",
"middle": [],
"last": "Legendre",
"suffix": ""
},
{
"first": "Yoshiro",
"middle": [],
"last": "Miyata",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Smolensky",
"suffix": ""
}
],
"year": 1990,
"venue": "Cognitive Science Society",
"volume": "12",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00e9raldine Legendre, Yoshiro Miyata, and Paul Smolen- sky. 1990b. Harmonic Grammar: A formal multi-level connectionist theory of linguistic well- formedness: Theoretical foundations. In Morton Ann Gernsbacher and Sharon J. Derry, editors, Annual con- ference of the Cognitive Science Society 12, pages 388-395, Mahwah, NJ. Lawrence Erlbaum.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The optimality theory/harmonic grammar connection",
"authors": [
{
"first": "G\u00e8raldine",
"middle": [],
"last": "Legendre",
"suffix": ""
},
{
"first": "Antonella",
"middle": [],
"last": "Sorace",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Smolensky",
"suffix": ""
}
],
"year": 2006,
"venue": "The Harmonic Mind",
"volume": "",
"issue": "",
"pages": "903--966",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00e8raldine Legendre, Antonella Sorace, and Paul Smolen- sky. 2006. The optimality theory/harmonic grammar connection. In Paul Smolensky and G\u00e8raldine Legen- dre, editors, The Harmonic Mind, pages 903-966. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Afshin Rostamizadeh, and Ameet Talwalkar. 2012. Foundations of Machine Learning",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehryar Mohri, Afshin Rostamizadeh, and Ameet Tal- walkar. 2012. Foundations of Machine Learning. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The origin of sound patterns in vocal tract constraints",
"authors": [
{
"first": "John",
"middle": [
"J"
],
"last": "Ohala",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "189--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John J. Ohala. 1983. The origin of sound patterns in vocal tract constraints. In Peter F. MacNeilage, editor, The production of speech, pages 189-216. Springer- Verlag, New York.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Statistical learning theory. Lecture notes for the class 18",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Panchenko",
"suffix": ""
}
],
"year": 2004,
"venue": "Topics in Statistics), Department of Mathematics",
"volume": "465",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Panchenko. 2004. Statistical learning theory. Lecture notes for the class 18.465 (Topics in Statis- tics), Department of Mathematics, MIT.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Weighted constraints in Generative Linguistics",
"authors": [
{
"first": "Joe",
"middle": [],
"last": "Pater",
"suffix": ""
}
],
"year": 2009,
"venue": "Cognitive Science",
"volume": "33",
"issue": "",
"pages": "999--1035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joe Pater. 2009. Weighted constraints in Generative Lin- guistics. Cognitive Science, 33:999-1035.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Universal grammar with weighted constraints",
"authors": [
{
"first": "Joe",
"middle": [],
"last": "Pater",
"suffix": ""
}
],
"year": 2016,
"venue": "Harmonic Grammar and Harmonic Serialism",
"volume": "",
"issue": "",
"pages": "1--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joe Pater. 2016. Universal grammar with weighted con- straints. In Joe Pater and John J. McCarthy, editors, Harmonic Grammar and Harmonic Serialism, pages 1-46. Equinox, London.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Harmonic Grammar with Linear Programming: From linear systems to linguistic typology",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Pater",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Jesney",
"suffix": ""
},
{
"first": "Rajesh",
"middle": [],
"last": "Bhatt",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Becker",
"suffix": ""
}
],
"year": 2010,
"venue": "Phonology",
"volume": "27",
"issue": "1",
"pages": "1--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Potts, Joe Pater, Karen Jesney, Rajesh Bhatt, and Michael Becker. 2010. Harmonic Grammar with Linear Programming: From linear systems to linguis- tic typology. Phonology, 27(1):1-41.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Optimality: From neural networks to universal grammar",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Prince",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Smolensky",
"suffix": ""
}
],
"year": 1997,
"venue": "Science",
"volume": "275",
"issue": "",
"pages": "1604--1610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Prince and Paul Smolensky. 1997. Optimality: From neural networks to universal grammar. Science, 275:1604-1610.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Optimality Theory: Constraint Interaction in generative grammar",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Prince",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Smolensky",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Prince and Paul Smolensky. 2004. Optimality The- ory: Constraint Interaction in generative grammar.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The complexity of ranking hypotheses in Optimality Theory. Computational Linguistics",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Riggle",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "35",
"issue": "",
"pages": "47--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Riggle. 2009. The complexity of ranking hypothe- ses in Optimality Theory. Computational Linguistics, 35(1):47-59.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Boosting the margin: a new explanation for the effectiveness of voting methods. The Annals of Statistics",
"authors": [
{
"first": "Robert",
"middle": [
"E"
],
"last": "Shapire",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Freund",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bartlett",
"suffix": ""
},
{
"first": "Wee Sun",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "26",
"issue": "",
"pages": "1651--1686",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert E. Shapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. 1998. Boosting the margin: a new ex- planation for the effectiveness of voting methods. The Annals of Statistics, 26.5:1651-1686.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The Harmonic Mind",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Smolensky",
"suffix": ""
},
{
"first": "G\u00e8raldine",
"middle": [],
"last": "Legendre",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Smolensky and G\u00e8raldine Legendre. 2006. The Harmonic Mind. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "OT-Help 2.0. Software package. Software Package. University of Massachussetts",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Staubs",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Becker",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pratt",
"suffix": ""
},
{
"first": "John",
"middle": [
"J"
],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Pater",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Staubs, Michael Becker, Christopher Potts, Patrick Pratt, John J. McCarthy, and Joe Pater. 2010. OT-Help 2.0. Software package. Software Package. University of Massachussetts, Amherst.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The phonology of perceptibility effects: the P-map and its consequences for constraint organization",
"authors": [
{
"first": "Donca",
"middle": [],
"last": "Steriade",
"suffix": ""
}
],
"year": 2008,
"venue": "The nature of the word: essays in honor of Paul Kiparsky",
"volume": "",
"issue": "",
"pages": "151--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donca Steriade. 2008. The phonology of perceptibil- ity effects: the P-map and its consequences for con- straint organization. In Kristin Hanson and Sharon Inkelas, editors, The nature of the word: essays in honor of Paul Kiparsky, pages 151-179. MIT Press, Cambridge.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "On the uniform convergence of relative frequencies of events to their probabilities",
"authors": [
{
"first": "N",
"middle": [],
"last": "Vladimir",
"suffix": ""
},
{
"first": "Alexey",
"middle": [
"Y"
],
"last": "Vapnik",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chervonenkis",
"suffix": ""
}
],
"year": 1971,
"venue": "Theory of Probability and its Applications",
"volume": "16",
"issue": "",
"pages": "264--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir N. Vapnik and Alexey Y. Chervonenkis. 1971. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264-280.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Average of the generalization error Err(n, G) over OT and over HG-non-OT target grammars as a function of n, for rounding harmony (left) and syllable types (right) data.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Average effective dimension of the learner's weights over OT and HG-non-OT target grammars as a function of n, for rounding harmony (left) and syllable types (right) data.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Generalization error Err(n, G) as a function of n, for rounding harmony (left) and syllable types (right) data, for each",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Average over OT target grammars of the generalization errors Err(n, G) in algorithm 1 and ErrOT(n, G) in algorithm 2, for harmony (left) and syllable types (right) data. Effective dimension of the weights in algorithm 1 averaged over OT and over HG-non-OT target grammars; effective dimension of the weights in algorithm 2 averaged over OT target grammars.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Learning simulation for OT targets. 1 for G in the typology do 2 for n = 3, . . . , N do 3 for m = 1, . . . , 250 do 4",
"uris": null,
"num": null,
"type_str": "figure"
}
}
}
}