| { |
| "paper_id": "D16-1045", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:37:12.818532Z" |
| }, |
| "title": "The Structured Weighted Violations Perceptron Algorithm", |
| "authors": [ |
| { |
| "first": "Rotem", |
| "middle": [], |
| "last": "Dror", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present the Structured Weighted Violations Perceptron (SWVP) algorithm, a new structured prediction algorithm that generalizes the Collins Structured Perceptron (CSP, (Collins, 2002)). Unlike CSP, the update rule of SWVP explicitly exploits the internal structure of the predicted labels. We prove the convergence of SWVP for linearly separable training sets, provide mistake and generalization bounds, and show that in the general case these bounds are tighter than those of the CSP special case. In synthetic data experiments with data drawn from an HMM, various variants of SWVP substantially outperform its CSP special case. SWVP also provides encouraging initial dependency parsing results.", |
| "pdf_parse": { |
| "paper_id": "D16-1045", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present the Structured Weighted Violations Perceptron (SWVP) algorithm, a new structured prediction algorithm that generalizes the Collins Structured Perceptron (CSP, (Collins, 2002)). Unlike CSP, the update rule of SWVP explicitly exploits the internal structure of the predicted labels. We prove the convergence of SWVP for linearly separable training sets, provide mistake and generalization bounds, and show that in the general case these bounds are tighter than those of the CSP special case. In synthetic data experiments with data drawn from an HMM, various variants of SWVP substantially outperform its CSP special case. SWVP also provides encouraging initial dependency parsing results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The structured perceptron ( (Collins, 2002) , henceforth denoted CSP) is a prominent training algorithm for structured prediction models in NLP, due to its effective parameter estimation and simple implementation. It has been utilized in numerous NLP applications including word segmentation and POS tagging (Zhang and Clark, 2008) , dependency parsing (Koo and Collins, 2010; Goldberg and Elhadad, 2010; Martins et al., 2013) , semantic parsing (Zettlemoyer and Collins, 2007) and information extraction (Hoffmann et al., 2011; Reichart and Barzilay, 2012) , if to name just a few.", |
| "cite_spans": [ |
| { |
| "start": 28, |
| "end": 43, |
| "text": "(Collins, 2002)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 308, |
| "end": 331, |
| "text": "(Zhang and Clark, 2008)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 353, |
| "end": 376, |
| "text": "(Koo and Collins, 2010;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 377, |
| "end": 404, |
| "text": "Goldberg and Elhadad, 2010;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 405, |
| "end": 426, |
| "text": "Martins et al., 2013)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 463, |
| "end": 477, |
| "text": "Collins, 2007)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 505, |
| "end": 528, |
| "text": "(Hoffmann et al., 2011;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 529, |
| "end": 557, |
| "text": "Reichart and Barzilay, 2012)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Like some training algorithms in structured prediction (e.g. structured SVM (Taskar et al., 2004; Tsochantaridis et al., 2005) , MIRA (Crammer and Singer, 2003) and LaSo (Daum\u00e9 III and Marcu, 2005 )), CSP considers in its update rule the difference between complete predicted and gold standard labels (Sec. 2). Unlike others (e.g. factored MIRA (McDonald et al., 2005b; McDonald et al., 2005a) and dual-loss based methods (Meshi et al., 2010) ) it does not exploit the structure of the predicted label. This may result in valuable information being lost.", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 97, |
| "text": "(Taskar et al., 2004;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 98, |
| "end": 126, |
| "text": "Tsochantaridis et al., 2005)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 134, |
| "end": 160, |
| "text": "(Crammer and Singer, 2003)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 170, |
| "end": 196, |
| "text": "(Daum\u00e9 III and Marcu, 2005", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 345, |
| "end": 369, |
| "text": "(McDonald et al., 2005b;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 370, |
| "end": 393, |
| "text": "McDonald et al., 2005a)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 422, |
| "end": 442, |
| "text": "(Meshi et al., 2010)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Consider, for example, the gold and predicted dependency trees of Figure 1 . The substantial difference between the trees may be mostly due to the difference in roots (are and worse, respectively). Parameter update w.r.t this mistake may thus be more useful than an update w.r.t the complete trees.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 66, |
| "end": 74, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this work we present a new perceptron algorithm with an update rule that exploits the structure of a predicted label when it differs from the gold label (Section 3). Our algorithm is called The Structured Weighted Violations Perceptron (SWVP) as its update rule is based on a weighted sum of updates w.r.t violating assignments and non-violating assignments: assignments to the input example, derived from the predicted label, that score higher (for violations) and lower (for non-violations) than the gold standard label according to the current model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our concept of violating assignment is based on Huang et al. (2012) that presented a variant of the CSP algorithm where the argmax inference problem is replaced with a violation finding function. Their update rule, however, is identical to that of the CSP algorithm. Importantly, although CSP and the above variant do not exploit the internal structure of the predicted label, they are special cases of SWVP.", |
| "cite_spans": [ |
| { |
| "start": 48, |
| "end": 67, |
| "text": "Huang et al. (2012)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In Section 4 we prove that for a linearly separable training set, SWVP converges to a linear separator of the data under certain conditions on the parameters of the algorithm, that are respected by the CSP special case. We further prove mistake and generalization bounds for SWVP, and show that in the general case the SWVP bounds are tighter than the CSP's.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In Section 5 we show that SWVP allows aggressive updates, that exploit only violating assignments derived from the predicted label, and more balanced updates, that exploit both violating and non-violating assignments. In experiments with synthetic data generated by an HMM, we demonstrate that various SWVP variants substantially outperform CSP training. We also provide initial encouraging dependency parsing results, indicating the potential of SWVP for real world NLP applications.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In structured prediction the task is to find a mapping f : X \u2192 Y, where y \u2208 Y is a structured object rather than a scalar, and a feature mapping \u03c6(x, y) :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Collins Structured Perceptron", |
| "sec_num": "2" |
| }, |
| { |
| "text": "X \u00d7 Y(x) \u2192 R d is given. In this work we denote Y(x) = {y |y \u2208 D Y Lx },", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Collins Structured Perceptron", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where L x , a scalar, is the size of the allowed output sequence for an input x and D Y is the domain of y i for every i \u2208 {1, . . . L x }. 1 Our results, however, hold for the general case of an output space with variable size vectors as well.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Collins Structured Perceptron", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The CSP algorithm (Algorithm 1) aims to learn a parameter (or weight) vector w \u2208 R d , that separates the training data, i.e. for each training example (x, y) it holds that: y = arg max y \u2208Y(x) w \u2022 \u03c6(x, y ). To find such a vector the algorithm iterates over the training set examples and solves the above inference (argmax) problem. If the inferred label y * differs from the gold label y the update w = w + \u2206\u03c6(x, y, y * ) is performed. For linearly separable training data (see definition 4), CSP is proved to converge to a vector w separating the training data. Collins and Roark (2004) and Huang et al. (2012) expanded the CSP algorithm by proposing various alternatives to the argmax inference problem which is often intractable in structured prediction problems (e.g. in high-order graph-based dependency parsing (McDonald and Pereira, 2006) ). The basic idea is replacing the argmax problem with the search for a violation: an output label that the model scores higher Algorithm 1 The Structured Perceptron (CSP)", |
| "cite_spans": [ |
| { |
| "start": 564, |
| "end": 588, |
| "text": "Collins and Roark (2004)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 593, |
| "end": 612, |
| "text": "Huang et al. (2012)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 818, |
| "end": 846, |
| "text": "(McDonald and Pereira, 2006)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Collins Structured Perceptron", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Input: data D = {x i , y i } n i=1 , feature mapping \u03c6 Output: parameter vector w \u2208 R d Define: \u2206\u03c6(x, y, z) \u03c6(x, y) \u2212 \u03c6(x, z) 1: Initialize w = 0. 2: repeat 3: for each (x i , y i ) \u2208 D do 4: y * = arg max y \u2208Y(x i ) w \u2022 \u03c6(x i , y ) 5: if y * = y i then 6: w = w + \u2206\u03c6(x i , y i , y * ) 7:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Collins Structured Perceptron", |
| "sec_num": "2" |
| }, |
| { |
| "text": "end if 8: end for 9: until Convergence than the gold standard label. The update rule in these CSP variants is, however, identical to the CSP's. We, in contrast, propose a novel update rule that exploits the internal structure of the model's prediction regardless of the way this prediction is generated.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Collins Structured Perceptron", |
| "sec_num": "2" |
| }, |
| { |
| "text": "SWVP exploits the internal structure of a predicted label y * = y for a training example (x, y) \u2208 D, by updating the weight vector with respect to substructures of y * . We start by presenting the fundamental concepts at the basis of our algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Structured Weighted Violations Perceptron (SWVP)", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Sub-structure Sets We start with two fundamental definitions: (1) An individual sub-structure of a structured object (or label) y \u2208 D Y Lx , denoted with J, is defined to be a subset of indexes J \u2286 [L x ]; 2 and (2) A set of substructures for a training example (x, y), denoted with JJ x , is defined as JJ x \u2286 2 [Lx] .", |
| "cite_spans": [ |
| { |
| "start": 313, |
| "end": 317, |
| "text": "[Lx]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Concepts", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Mixed Assignment We next define the concept of a mixed assignment: Definition 1. For a training pair (x, y) and a predicted label y * \u2208 Y(x), y * = y, a mixed assignment (M A) vector denoted as m J (y * , y) is defined with respect to J \u2208 JJ x as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Concepts", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "m J k (y * , y) = y * k k \u2208 J y k else", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Concepts", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "That is, a mixed assignment is a new label, derived from the predicted label y * , that is identical to y * in all indexes in J and to y otherwise. For simplicity we denote m J (y * , y) = m J when the reference y * and y labels are clear from the context. Consider, for example, the trees of Figure 1 , assuming that the top tree is y, the middle tree is y * and J = [2, 5] . 3 In the m J (y * , y) (bottom) tree the heads of all the words are identical to those of the top tree, except for the heads of mistakes and of then.", |
| "cite_spans": [ |
| { |
| "start": 368, |
| "end": 371, |
| "text": "[2,", |
| "ref_id": null |
| }, |
| { |
| "start": 372, |
| "end": 374, |
| "text": "5]", |
| "ref_id": null |
| }, |
| { |
| "start": 377, |
| "end": 378, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 293, |
| "end": 301, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Basic Concepts", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Violation The next central concept is that of a violation, originally presented by Huang et al. (2012) :", |
| "cite_spans": [ |
| { |
| "start": 83, |
| "end": 102, |
| "text": "Huang et al. (2012)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Concepts", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Definition 2. A triple (x, y, y * ) is said to be a violation with respect to a training example (x, y) and a parameter vector w if for y * \u2208 Y(x) it holds that y * = y and w \u2022 \u2206\u03c6(x, y, y * ) \u2264 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Concepts", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The SWVP algorithm distinguishes between M As that are violations, and ones that are not. For a triplet (x, y, y * ) and a set of substructures JJ x \u2286 2 [Lx] we provide the following notations:", |
| "cite_spans": [ |
| { |
| "start": 153, |
| "end": 157, |
| "text": "[Lx]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Concepts", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "I(y * , y, JJx) v = {J \u2208 JJx|m J = y, w\u2022\u2206\u03c6(x, y, m J ) \u2264 0} I(y * , y, JJx) nv = {J \u2208 JJx|m J = y, w\u2022\u2206\u03c6(x, y, m J ) > 0}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Concepts", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "This notation divides the set of substructures into two subsets, one consisting of the substructures that yield violating MAs and one consisting of the substructures that yield non-violating MAs. Here again when the reference label y * and the set JJ x are known we denote:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Concepts", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "I(y * , y, JJ x ) v = I v , I(y * , y, JJ x ) nv = I nv and I = I v \u222a I nv .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Concepts", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Weighted Violations The key idea of SWVP is the exploitation of the internal structure of the predicted label in the update rule. For this aim at each iteration we define the set of substructures, JJ x , and then, for each J \u2208 JJ x , update the parameter vector, w, with respect to the mixed assignments, M A J 's. This is a more flexible setup compared to CSP, as we can update with respect to the predicted output (if it is a violation, as is promised if inference is performed via argmax), if we wish to do so, as well as with respect to other mixed assignments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Concepts", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Naturally, not all mixed assignments are equally important for the update rule. Hence, we weigh the different updates using a weight vector \u03b3. This paper therefore extends the observation of Huang et al. (2012) that perceptron parameter update can be performed w.r.t violations (Section 2), by showing that w can actually be updated w.r.t linear combinations of mixed assignments, under certain conditions on the selected weights. 3 We index the dependency tree words from 1 onwards. Some mistakes are worse than others.", |
| "cite_spans": [ |
| { |
| "start": 191, |
| "end": 210, |
| "text": "Huang et al. (2012)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 431, |
| "end": 432, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Concepts", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Some mistakes are worse than others.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Concepts", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Some mistakes are worse than others. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Concepts", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "With these definitions we can present the SWVP algorithm (Algorithm 2). SWVP is in fact a family of algorithms differing with respect to two decisions that can be made at each pass over each training example (x, y): the choice of the set JJ x and the implementation of the SETGAMMA function.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "SWVP is very similar to CSP except for in the update rule. Like in CSP, the algorithm iterates over the training data examples and for each example it first predicts a label according to the current parameter vector w (inference is discussed in Section 4.2, property 2). The main difference from CSP is in the update rule (lines 6-12). Here, for each substructure in the substructure set, J \u2208 JJ x , the algorithm generates a mixed assignment m J (lines 7-9). Then, w is updated with a weighted sum of the mixed assignments (line 11), unlike in CSP where the update is held w.r.t the predicted assignment only.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The \u03b3(m J ) weights assigned to each of the \u2206\u03c6(x, y, m J ) updates are defined by a SETGAMMA function (line 10). Intuitively, a \u03b3(m J ) weight should be higher the more the mixed assignment is assumed to convey useful information that can guide the update of w in the right direction. In Section 4 we detail the conditions on SETGAMMA under which SWVP converges, and in Section 5 we describe various SETGAMMA implementations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Going back to the example of Figure 1 , one would assume (Sec. 1) that the head word prediction for worse is pivotal to the substantial difference between the two top trees (UAS of 0.2). CSP does not directly exploit this observation as it only updates its parameter vector with respect to the differences between complete assignments: w = w + \u2206\u03c6(x, y, z).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 29, |
| "end": 37, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In contrast, SWVP can exploit this observation in various ways. For example, it can generate a mixed assignment for each of the erroneous arcs where all other words are assigned their correct arc (according to the gold tree) except for that specific arc which is kept as in the bottom tree. Then, higher weights can be assigned to errors that seem more central than others. We elaborate on this in the next two sections.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Input: data D = {x i , y i } n i=1 , feature mapping \u03c6 Output: parameter vector w \u2208 R d Define: \u2206\u03c6(x, y, z) \u03c6(x, y) \u2212 \u03c6(x, z) 1: Initialize w = 0. 2: repeat 3: for each (x i , y i ) \u2208 D do 4: y * = arg max y \u2208Y(x i ) w \u2022 \u03c6(x i , y ) 5: if y * = y i then 6: Define: JJ x i \u2286 2 [L x i ] 7: for J \u2208 JJ x i do 8: Define: m J s.t. m J k = y * k k \u2208 J y i k else 9:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 2 The Structured Weighted Violations Perceptron", |
| "sec_num": null |
| }, |
| { |
| "text": "end for 10: () 11:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 2 The Structured Weighted Violations Perceptron", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03b3 = SETGAMMA", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 2 The Structured Weighted Violations Perceptron", |
| "sec_num": null |
| }, |
| { |
| "text": "w = w + J\u2208I v \u222aI nv \u03b3(m J )\u2206\u03c6(x i , y i , m J )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 2 The Structured Weighted Violations Perceptron", |
| "sec_num": null |
| }, |
| { |
| "text": "12:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 2 The Structured Weighted Violations Perceptron", |
| "sec_num": null |
| }, |
| { |
| "text": "end if 13: end for 14: until Convergence", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 2 The Structured Weighted Violations Perceptron", |
| "sec_num": null |
| }, |
| { |
| "text": "We start this section with the convergence conditions on the \u03b3 vector which weighs the mixed assignment updates in the SWVP update rule (line 11). Then, using these conditions, we describe the relation between the SWVP and the CSP algorithms. After that, we prove the convergence of SWVP and analyse the derived properties of the algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theory", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u03b3 Selection Conditions Our main observation in this section is that SWVP converges under two conditions: (a) the training set D is linearly separable; and (b) for any parameter vector w achievable by the algorithm, there exists (x, y) \u2208 D with JJ x \u2286 2 [Lx] , such that for the predicted output y * = y, SETGAMMA returns a \u03b3 weight vector that respects the \u03b3 selection conditions defined as follows: Definition 3. The \u03b3 selection conditions for the SWVP algorithm are (I = I v \u222a I nv ):", |
| "cite_spans": [ |
| { |
| "start": 253, |
| "end": 257, |
| "text": "[Lx]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theory", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(1) J\u2208I \u03b3(m J ) = 1. \u03b3(m J ) \u2265 0, \u2200J \u2208 I. (2) w \u2022 J\u2208I \u03b3(m J )\u2206\u03c6(x i , y i , m J ) \u2264 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theory", |
| "sec_num": "4" |
| }, |
| { |
| "text": "With this definition we are ready to prove the following property.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theory", |
| "sec_num": "4" |
| }, |
| { |
| "text": "SWVP Generalizes the CSP Algorithm We now show that the CSP algorithm is a special case of SWVP. CSP can be derived from SWVP when taking: JJ x = {[L x ]}, and \u03b3(m [Lx] ) = 1 for every (x, y) \u2208 D. With these parameters, the \u03b3 selection conditions hold for every w and y * . Condition (1) holds trivially as there is only one \u03b3 coefficient and it is equal to 1. Condition (2) holds as y * = m [Lx] and", |
| "cite_spans": [ |
| { |
| "start": 164, |
| "end": 168, |
| "text": "[Lx]", |
| "ref_id": null |
| }, |
| { |
| "start": 392, |
| "end": 396, |
| "text": "[Lx]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theory", |
| "sec_num": "4" |
| }, |
| { |
| "text": "hence I = {[L x ]} and w \u2022 J\u2208I \u2206\u03c6(x, y, m J ) \u2264 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theory", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Here we give the theorem regarding the convergence of the SWVP in the separable case. We first define:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Definition 4. A data set D = {x i , y i } n i=1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "is linearly separable with margin \u03b4 > 0 if there exists some vector u with u 2 = 1 such that for all i:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "u \u2022 \u2206\u03c6(x i , y i , z) \u2265 \u03b4, \u2200z \u2208 Y(x i ). Definition 5. The radius of a data set D = {x i , y i } n i=1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "is the minimal scalar R s.t for all i:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2206\u03c6(x i , y i , z) \u2264 R, \u2200z \u2208 Y(x i ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We next extend these definitions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Definition 6. Given a data set D = {x i , y i } n i=1 and a set JJ = {JJ x i \u2286 2 [L x i ] |(x i , y i ) \u2208 D}, D", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "is linearly separable w.r.t JJ, with margin \u03b4 JJ > 0 if there exists a vector u with u 2 = 1 such that:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "u \u2022 \u2206\u03c6(x i , y i , m J (z, y i )) \u2265 \u03b4 JJ for all i, z \u2208 Y(x i ), J \u2208 JJ x i . Definition 7. The mixed assignment radius w.r.t JJ of a data set D = {x i , y i } n i=1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "is a constant R JJ s.t for all i it holds that:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2206\u03c6(x i , y i , m J (z, y i )) \u2264 R JJ , \u2200z \u2208 Y(x i ), J \u2208 JJ x i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "With these definitions we can make the following observation (proof in A): Observation 1. For linearly separable data D and a set JJ, every unit vector u that separates the data with margin \u03b4, also separates the data with respect to mixed assignments with JJ, with margin \u03b4 JJ \u2265 \u03b4. Likewise, it holds that R JJ \u2264 R.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We can now state our convergence theorem. While the proof of this theorem resembles that of the CSP (Collins, 2002) , unlike the CSP proof the SWVP proof relies on the \u03b3 selection conditions presented above and on the Jensen inequality.", |
| "cite_spans": [ |
| { |
| "start": 100, |
| "end": 115, |
| "text": "(Collins, 2002)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Theorem 1. For any dataset D, linearly separable with respect to JJ with margin \u03b4 JJ > 0, the SWVP algorithm terminates after t \u2264 (R JJ ) 2 (\u03b4 JJ ) 2 steps, where R JJ is the mixed assignment radius of D w.r.t. JJ.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Proof. Let w t be the weight vector before the t th update, thus w 1 = 0. Suppose the t th update occurs on example (x, y), i.e. for the predicted output y * it holds that y * = y. We will bound w t+1 2 from both sides. First, it follows from the update rule of the algorithm that:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "w t+1 = w t + J\u2208I v \u222aI nv \u03b3(m J )\u2206\u03c6(x, y, m J ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For simplicity, in this proof we will use the notation I v \u222a I nv = I. Hence, multiplying each side of the equation by u yields:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "u \u2022 w t+1 = u \u2022 w t + u \u2022 J\u2208I \u03b3(m J )\u2206\u03c6(x, y, m J ) = u \u2022 w t + J\u2208I \u03b3(m J )u \u2022 \u2206\u03c6(x, y, m J ) \u2265 u \u2022 w t + J\u2208I \u03b3(m J )\u03b4 JJ (margin property) \u2265 u \u2022 w t + \u03b4 JJ \u2265 . . . \u2265 t\u03b4 JJ .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The last inequality holds because J\u2208I \u03b3(m J ) = 1. From this we get that", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "w t+1 2 \u2265 (\u03b4 JJ ) 2 t 2 since u =1. Second, w t+1 2 = w t + J\u2208I \u03b3(m J )\u2206\u03c6(x, y, m J ) 2 = w t 2 + J\u2208I \u03b3(m J )\u2206\u03a6(x, y, m J ) 2 + 2w t \u2022 J\u2208I \u03b3(m J )\u2206\u03a6(x, y, m J ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "From \u03b3 selection condition (2) we get that:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "w t+1 2 \u2264 w t 2 + J\u2208I \u03b3(m J )\u2206\u03a6(x, y, m J ) 2 \u2264 w t 2 + J\u2208I \u03b3(m J ) \u2206\u03a6(x, y, m J ) 2 \u2264 w t 2 + (R JJ ) 2 . (radius property)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The inequality one before the last results from the Jensen inequality which holds due to (a) \u03b3 selection condition (1); and (b) the squared norm function being convex. From this we finally get:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "w t+1 2 \u2264 w t 2 + (R JJ ) 2 \u2264 . . . \u2264 t(R JJ ) 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Combining the two steps we get:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(\u03b4 JJ ) 2 t 2 \u2264 w t+1 2 \u2264 t(R JJ ) 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "From this it is easy to derive the upper bound in the theorem:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "t \u2264 (R JJ ) 2 (\u03b4 JJ ) 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence for Linearly Separable Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We next point on three properties of the SWVP algorithm, derived from its convergence proof:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence Properties", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Property 1 (tighter iterations bound) The convergence proof of CSP (Collins, 2002) is given for a vector u that linearly separates the data, with margin \u03b4 and for a data radius R. Following observation 1, it holds that in our case, u also linearly separates the data with respect to mixed assignments with a set JJ and with margin \u03b4 JJ \u2265 \u03b4. Together with the definition of R JJ \u2264 R we get that:", |
| "cite_spans": [ |
| { |
| "start": 67, |
| "end": 82, |
| "text": "(Collins, 2002)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence Properties", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "(R JJ ) 2 (\u03b4 JJ ) 2 \u2264 R 2 \u03b4 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence Properties", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": ". This means that the bound on the number of updates made by SWVP is tighter than the bound of CSP.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence Properties", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Property 2 (inference) From the \u03b3 selection conditions it holds that any label from which at least one violating MA can be derived through JJ x is suitable for an update. This is because in such a case we can choose, for example, a SETGAMMA function that assigns the weight of 1 to that MA, and the weight of 0 to all other MAs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence Properties", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Algorithm 2 employs the argmax inference function, following the basic reasoning that it is a good choice to base the parameter update on. Importantly, if the inference function is argmax and the algorithm performs an update (y * = y), this means that y * , the output of the argmax function, is a violating MA by definition. However, it is obvious that solving the inference problem and the optimal \u03b3 assignment problems jointly may result in more informed parameter (w) updates. We leave a deeper investigation of this issue to future research.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence Properties", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Property 3 (dynamic updates) The \u03b3 selection conditions paragraph states two conditions ((a) and (b)) under which the convergence proof holds. While it is trivial for SETGAMMA to generate a \u03b3 vector that respects condition (a), if there is a parameter vector w' achievable by the algorithm for which SETGAMMA cannot generate \u03b3 that respects condition (b), SWVP gets stuck when reaching w'.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence Properties", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "This problem can be solved with dynamic updates. A deep look into the convergence proof reveals that the set JJ x and the SETGAMMA function can actually differ between iterations. While this will change the bound on the number of iterations, it will not change the fact that the algorithm converges if the data is linearly separable. This makes SWVP highly flexible as it can always back off to the CSP setup of JJ x = {[L x ]}, and \u2200(x, y) \u2208 D : \u03b3(m [Lx] ) = 1, update its parameters and continue with its original JJ and SETGAMMA when this option becomes feasible. If this does not happen, the algorithm can continue till convergence with the CSP setup.", |
| "cite_spans": [ |
| { |
| "start": 451, |
| "end": 455, |
| "text": "[Lx]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convergence Properties", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The following bounds are proved: the number of updates in the separable case (see Theorem 1); the number of mistakes in the non-separable case (see Appendix B); and the probability to misclassify an unseen example (see supplementary material). It can be shown that in the general case these bounds are tighter than those of the CSP special case. We next discuss variants of SWVP.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mistake and Generalization Bounds", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Here we present types of update rules that can be implemented within SWVP. Such rule types are defined by: (a) the selection of \u03b3, which should respect the \u03b3 selection conditions (see Definition 3) and (b) the selection of JJ = {JJ x \u2286 2 [Lx] |(x, y) \u2208 D}, the substructure sets for the training examples.", |
| "cite_spans": [ |
| { |
| "start": 238, |
| "end": 242, |
| "text": "[Lx]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Passive Aggressive SWVP", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u03b3 Selection A first approach we consider is the aggressive approach 4 where only mixed assignments that are violations {m J : J \u2208 I v } are exploited (i.e. for all J \u2208 I nv , \u03b3(m J ) = 0). Note, that in this case condition (2) of the \u03b3 selection conditions trivially holds as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Passive Aggressive SWVP", |
| "sec_num": "5" |
| }, |
| { |
| "text": "w \u2022 J\u2208I v \u03b3(m J )\u2206\u03c6(x, y, m J ) \u2264 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Passive Aggressive SWVP", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The only remaining requirement is that condition (1) also holds, i.e. that J\u2208I v \u03b3(m J ) = 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Passive Aggressive SWVP", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The opposite, passive approach, exploits only non-violating MA's {m J : J \u2208 I nv }. However, such \u03b3 assignments do not respect \u03b3 selection condition (2), as they yield: w \u2022 J\u2208I nv \u03b3(m J )\u2206\u03c6(x, y, m J ) \u2264 0 which holds if and only if for every J \u2208 I nv , \u03b3(m J ) = 0 that in turn contradicts condition (1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Passive Aggressive SWVP", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Finally, we can take a balanced approach which gives a positive \u03b3 coefficient for at least one violating MA and at least one positive \u03b3 coefficient for a non-violating MA. This approach is allowed by SWVP as long as both \u03b3 selection conditions hold.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Passive Aggressive SWVP", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We implemented two weighting methods, both based on the concept of margin:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Passive Aggressive SWVP", |
| "sec_num": "5" |
| }, |
| { |
| "text": "(1) Weighted Margin (WM): \u03b3(m J ) = |w\u2022\u2206\u03c6(x,y,m J )| \u03b2 J \u2208JJx |w\u2022\u2206\u03c6(x,y,m J )| \u03b2 (2) Weighted Margin Rank (WMR): \u03b3(m J ) = |JJx|\u2212r |JJx| \u03b2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Passive Aggressive SWVP", |
| "sec_num": "5" |
| }, |
| { |
| "text": "where r is the rank of |w \u2022 \u2206\u03c6(x, y, m J (y * , y))| among the |w \u2022 \u2206\u03c6(x, y, m J (y * , y))| values for J \u2208 JJ x . Both schemes were implemented twice, within a balanced approach (denoted as B) and an aggressive approach (denoted as A). 5 The aggressive schemes respect both \u03b3 selection conditions. The balanced schemes, however, respect the first condition but not necessarily the second. Since all models that employ the balanced weighting schemes converged after at most 10 iterations, we did not impose this condition (which we could do by, e.g., excluding terms for J \u2208 I nv till condition (2) holds).", |
| "cite_spans": [ |
| { |
| "start": 237, |
| "end": 238, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Passive Aggressive SWVP", |
| "sec_num": "5" |
| }, |
| { |
| "text": "JJ Selection Another choice that strongly affects the updates made by SWVP is that of JJ. A choice of JJ x = 2 [Lx] , for every (x, y) \u2208 D results in an update rule which considers all possible mixing assignments derived from the predicted label y * and the gold label y. Such an update rule, however, requires computing a sum over an exponential number of terms (2 Lx ) and is therefore highly inefficient.", |
| "cite_spans": [ |
| { |
| "start": 111, |
| "end": 115, |
| "text": "[Lx]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Passive Aggressive SWVP", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Among the wide range of alternative approaches, in this paper we exploit single difference mixed assignments. In this approach we define: JJ = {JJ x = {{1}, {2}, . . . {L x }}|(x, y) \u2208 D}. For a training pair (x, y) \u2208 D, a predicted label y * and J = {j} \u2208 JJ x , we will have:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Passive Aggressive SWVP", |
| "sec_num": "5" |
| }, |
| { |
| "text": "m J k (y * , y) = y k k = j y * k k = j", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Passive Aggressive SWVP", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Under this approach for the pair (x, y) \u2208 D only L x terms are summed in the SWVP update rule. We leave a further investigation of JJ selection approaches to future research.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Passive Aggressive SWVP", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Synthetic Data We experiment with synthetic data generated by a linear-chain, first-order Hidden Markov Model (HMM, (Rabiner and Juang, 1986) ).", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 141, |
| "text": "(HMM, (Rabiner and Juang, 1986)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Our learning algorithm is a liner-chain conditional random field (CRF, (Lafferty et al., 2001 )):", |
| "cite_spans": [ |
| { |
| "start": 71, |
| "end": 93, |
| "text": "(Lafferty et al., 2001", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "6" |
| }, |
| { |
| "text": "P (y|x) = 1 Z(x) i=1:Lx exp(w \u2022 \u03c6(y i\u22121 , y i , x)) (where Z(x)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "6" |
| }, |
| { |
| "text": "is a normalization factor) with binary indicator fea-", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "6" |
| }, |
| { |
| "text": "tures {x i , y i , y i\u22121 , (x i , y i ), (y i , y i\u22121 ), (x i , y i , y i\u22121 )} for the triplet (y i , y i\u22121 , x).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "6" |
| }, |
| { |
| "text": "A dataset is generated by iteratively sampling K items, each is sampled as follows. We first sample a hidden state, y 1 , from a uniform prior distribution. Then, iteratively, for i = 1, 2, . . . , L x we sample an observed state from the emission probability and (for i < L x ) a hidden state from the transition probability. We experimented in 3 setups. In each setup we generated 10 datasets that were subsequently divided to a 7000 items training set, a 2000 items development set and a 1000 items test set. In all datasets, for each item, we set L x = 8. We experiment in three conditions: (1) simple(++), learnable(+++), (2) simple(++), learnable(++) and (3) simple(+), learnable(+). 6 For each dataset (3 setups, 10 datasets per setup) we train variants of the SWVP algorithm differing in the \u03b3 selection strategy (WM or WMR, Section 5), being aggressive (A) or passive (B), and in their \u03b2 parameter (\u03b2 = {0.5, 1, . . . , 5}). Training is done on the training subset and the best performing variant on the development subset is applied to the test subset. For CSP no development set is employed as there is no hyper-parameter to tune. We report averaged accuracy (fraction of observed states for which the model successfully predicts the hidden state value) across the test sets, together with the standard deviation.", |
| "cite_spans": [ |
| { |
| "start": 690, |
| "end": 691, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We also report initial dependency parsing results. We implemented our algorithms within the TurboParser (Martins et al., 2013). (2) simple(++), learnable(++): Cx = 5, Cy = 3, P (y |y) = perm(0.5, 0.3, 0.2), P (x|y) = perm(0.6, 0.15, 0.1, 0.1, 0.05).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "(3) simple(+), learnable(+): Cx = 20 , Cy = 7 , P (y |y) = perm(0.7, 0.2, 0.1, 0, . . . , 0)), P (x|y) = perm (0.4, 0.2, 0.1, 0.1, 0.1, 0, . . . , 0) .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 110, |
| "end": 149, |
| "text": "(0.4, 0.2, 0.1, 0.1, 0.1, 0, . . . , 0)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dependency Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "That is, every other aspect of the parser: feature set, probabilistic pruning algorithm, inference algorithm etc., is kept fixed but training is performed with SWVP. We compare our results to the parser performance with CSP training (which comes with the standard implementation of the parser).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "We experiment with the datasets of the CoNLL 2007 shared task on multilingual dependency parsing (Nilsson et al., 2007) , for a total of 9 languages. We followed the standard train/test split of these dataset. For SWVP, we randomly sampled 1000 sentences from each training set to serve as development sets and tuned the parameters as in the synthetic data experiments. CSP is trained on the training set and applied to the test set without any development set involved. We report the Unlabeled Attachment Score (UAS) for each language and model.", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 119, |
| "text": "(Nilsson et al., 2007)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "Synthetic Data Table 1 presents our results. In all three setups an SWVP algorithm is superior. Averaged accuracy differences between the best performing algorithms and CSP are: 3.72 (B-WMR, (simple(++), learnable(+++))), 5.29 (B-WM, (simple(++), learnable(++))) and 5.18 (A-WM, (simple(+), learnable(+))). In all setups SWVP outperforms CSP in terms of averaged performance (except from B-WMR for (simple(+), learnable(+))). Moreover, the weighted models are more stable than CSP, as indicated by the lower standard deviation of their accuracy scores. Finally, for the more simple and learnable datasets the SWVP models outperform CSP in the majority of cases (7-10/10).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 15, |
| "end": 22, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We measure generalization from development to test data in two ways. First, for each SWVP algorithm we count the number of times its \u03b2 parameter results in an algorithm that outperforms the CSP on the development set but not on the test set (not shown in the table). Of the 120 comparisons reported in the table (4 SWVP models, 3 setups, 10 comparisons per model/setup combination) this happened once (A-MV, (simple(++), learnable(+++)).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Second, we count the number of times the best development set value of the \u03b2 hyper-parameter is also the best value on the test set, or the test set accuracy with the best development set \u03b2 is at most 0.5% lower than that with the best test set \u03b2. The Gener- alization column of the table shows that this has not happened in all of the 120 runs of SWVP.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Dependency Parsing Results are given in Table 2 . For the SWVP trained models we report three numbers: (a) B-WM is the standard setup where the \u03b2 hyper parameter is tuned on the development data; (b) For Top B-WM we first selected the models with a UAS score within 0.1% of the best development data result, and of these we report the UAS of the model that performs best on the test set; and (c) Test B-WM reports results when \u03b2 is tuned on the test set. This measure provides an upper bound on SWVP with our simplistic JJ (Section 5).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 40, |
| "end": 48, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Our results indicate the potential of SWVP. Despite our simple JJ set, Top B-WM and Test B-WM improve over CSP in 5/9 and 6/9 cases in first order parsing, respectively, and in 7/9 cases in second order parsing. In the latter case, Test B-WM improves the UAS over CSP in 0.22% on average across languages. Unfortunately, SWVP still does not generalize well from train to test data as indicated, e.g., by the modest improvements B-WM achieves over CSP in only 5 of 9 languages in second order parsing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We presented the Structured Weighted Violations Perceptron (SWVP) algorithm, a generalization of the Structured Perceptron (CSP) algorithm that explicitly exploits the internal structure of the predicted label in its update rule. We proved the convergence of the algorithm for linearly separable training sets under certain conditions on its parameters, and provided generalization and mistake bounds.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "In experiments we explored only very simple configurations of the SWVP parameters\u03b3 and JJ. Nevertheless, several of our SWVP variants outperformed the CSP special case in synthetic data experiments. In dependency parsing experiments, SWVP demonstrated some improvements over CSP, but these do not generalize well. While we find these results somewhat encouraging, they emphasize the need to explore the much more flexible \u03b3 and JJ selection strategies allowed by SWVP (Sec. 4.2). In future work we will hence develop \u03b3 and JJ selection algorithms, where selection is ideally performed jointly with inference (property 2, Sec. 4.2), to make SWVP practically useful in NLP applications.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "A Proof Observation 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Proof. For every training example (x, y) \u2208 D, it holds that: \u222a z\u2208Y(x) m J (z, y) \u2286 Y(x). As u separates the data with margin \u03b4, it holds that:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "u \u2022 \u2206\u03c6(x, y, m J (z, y)) \u2265 \u03b4 JJx , \u2200z \u2208 Y(x), J \u2208 JJ x . u \u2022 \u2206\u03c6(x, y, z) \u2265 \u03b4, \u2200z \u2208 Y(x).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Therefore also \u03b4 JJx \u2265 \u03b4. As the last inequality holds for every (x, y) \u2208 D we get that \u03b4 JJ = min (x,y)\u2208D \u03b4 JJx \u2265 \u03b4. From the same considerations it holds that R JJ \u2264 R. This is because R JJ is the radius of a subset of the dataset with radius R (proper subset if", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "\u2203(x, y) \u2208 D, [L x ] /", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "\u2208 JJ x , non-proper subset otherwise).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Here we provide a mistake bound for the algorithm in the non-separable case. We start with the following definition and observation:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": "Definition 8. Given an example (x i , y i ) \u2208 D, for a u, \u03b4 pair define:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": "r i = u \u2022 \u03c6(x i , y i ) \u2212 max z\u2208Y(x i ) u \u2022 \u03c6(x i , z) i = max{0, \u03b4 \u2212 r i } r i JJ = u \u2022 \u03c6(x i , y i )\u2212 max z\u2208Y(x i ),J\u2208JJ x i u \u2022 \u03c6(x i , m J (z, y i )) Finally define: D u,\u03b4 = n i=1 2 i", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
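| { |
| "text": "As a quick numerical illustration of Definition 8, the following Python sketch (with made-up margin values of our own) computes the slack terms $\\epsilon_i = \\max\\{0, \\delta - r_i\\}$ and the resulting $D_{u,\\delta}$:\nimport numpy as np\n\n# toy per-example margins r_i = u . phi(x_i, y_i) - max_z u . phi(x_i, z)\nr = np.array([0.9, -0.2, 0.5])\ndelta = 0.6\n\neps = np.maximum(0.0, delta - r)       # per-example hinge slack epsilon_i\nD_u_delta = np.sqrt(np.sum(eps ** 2))  # D_{u,delta} from Definition 8\nprint(eps, D_u_delta)                  # [0.  0.8 0.1] and ~0.806", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |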
| { |
| "text": "Observation 2. For all i: r i \u2264 r i JJ . Observation 2 easily follows from Definition 8. Following this observation we denote: r dif f = min i {r i JJ \u2212 r i } \u2265 0 and present the next theorem:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": "Theorem 2. For any training sequence D, for the first pass over the training set of the CSP and the SWVP algorithms respectively, it holds that:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": "#mistakes \u2212 CSP \u2264 min u: u =1,\u03b4>0 (R + D u,\u03b4 ) 2 \u03b4 2 . #mistakes \u2212 SW V P \u2264 min u: u =1,\u03b4>0 (R JJ + D u,\u03b4 ) 2 (\u03b4 + r dif f ) 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": "As R JJ \u2264 R (Observation 1) and r dif f \u2265 0, we get a tighter bound for SWVP. The proof for #mistakes-CSP is given at (Collins, 2002) . The proof for #mistakes-SWVP is given below.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 133, |
| "text": "(Collins, 2002)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
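| { |
| "text": "To get a feel for the two bounds, here is a toy numerical comparison in Python with made-up values (ours, purely illustrative) that respect $R_{\\mathbb{J}} \\le R$ and $r_{\\mathrm{diff}} \\ge 0$:\nR, R_J = 5.0, 4.0                # R_J <= R by Observation 1\nD, delta, r_diff = 2.0, 0.5, 0.1\n\ncsp_bound = (R + D) ** 2 / delta ** 2                # 196.0\nswvp_bound = (R_J + D) ** 2 / (delta + r_diff) ** 2  # 100.0\nprint(csp_bound, swvp_bound)     # the SWVP bound is tighter", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |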
| { |
| "text": "Proof. We transform the representation \u03c6(x, y) \u2208 R d into a new representation \u03c8(x, y) \u2208 R d+n as follows: for i = 1, ..., d : \u03c8 i (x, y) = \u03c6 i (x, y), for j = 1, ..., n : \u03c8 d+j (x, y) = \u2206 if (x, y) = (x j , y j ) and 0 otherwise, where \u2206 > 0 is a parameter. Given a u, \u03b4 pair define v \u2208 R d+n as follows: for i = 1, ..., d : v i = u i , for j = 1, ..., n : v d+j = j \u2206 . Under these definitions we have:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": "v \u2022 \u03c8(x i , y i ) \u2212 v \u2022 \u03c8(x i , z) \u2265 \u03b4, \u2200i, z \u2208 Y(x i ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
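| { |
| "text": "A one-line check of this inequality (our own expansion, using Definition 8 and the construction of $\\psi$ and $v$): $v \\cdot \\psi(x_i, y_i) - v \\cdot \\psi(x_i, z) = u \\cdot \\phi(x_i, y_i) - u \\cdot \\phi(x_i, z) + \\epsilon_i \\ge r_i + \\max\\{0, \\delta - r_i\\} \\ge \\delta$, since the extra coordinate $d+i$ contributes $\\Delta \\cdot \\epsilon_i / \\Delta = \\epsilon_i$ to the first term and $0$ to the second.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |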
| { |
| "text": "For every i, z \u2208 Y(x i ), J \u2208 JJ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": "x i : v \u2022 \u03c8(x i , y i ) \u2212 v \u2022 \u03c8(x i , m J (z, y i )) \u2265 \u03b4 + r dif f . \u03c8(x i , y i ) \u2212 \u03c8(x i , m J (z, y i )) 2 \u2264 (R JJ ) 2 + \u2206 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": "Last, we have,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": "v 2 = u 2 + n i=1 2 i \u2206 2 = 1 + D 2 u,\u03b4 \u2206 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": "We get that the vector v v linearly separates the data with respect to single decision assignments with margin", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03b4 1+ D 2 U,\u03b4 \u2206 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": ". Likewise, v v linearly separates the data with respect to mixed assignments with JJ, with margin \u03b4+r dif f", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": "1+ D u,\u03b4 \u2206 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": ". Notice that the first pass of SWVP with representation \u03a8 is identical to the first pass with representation \u03a6 because the parameter weight for the additional features affects only a single example of the training data and do not affect the classification of test examples. By theorem 1 this means that the first pass of SWVP with representation \u03a8 makes at most ((", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": "R JJ ) 2 +\u2206 2 ) (\u03b4+r dif f ) 2 \u2022 1 + D 2 u,\u03b4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2206 2 . We minimize this w.r.t \u2206, which gives: \u2206 = R JJ D u,\u03b4 , and obtain the result guaranteed in the theorem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |
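| { |
| "text": "For completeness, here is the minimization step spelled out (our derivation of the assertion above). Writing the bound as a function of $\\Delta$: $f(\\Delta) = \\frac{(R_{\\mathbb{J}})^2 + \\Delta^2}{(\\delta + r_{\\mathrm{diff}})^2} \\left(1 + \\frac{D_{u,\\delta}^2}{\\Delta^2}\\right)$. Setting the derivative with respect to $\\Delta^2$ to zero gives $1 - (R_{\\mathbb{J}})^2 D_{u,\\delta}^2 / \\Delta^4 = 0$, i.e., $\\Delta^2 = R_{\\mathbb{J}} D_{u,\\delta}$, and substituting back yields exactly $\\frac{(R_{\\mathbb{J}} + D_{u,\\delta})^2}{(\\delta + r_{\\mathrm{diff}})^2}$, the bound stated in Theorem 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Mistake Bound -Non Separable Case", |
| "sec_num": null |
| }, |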
| { |
| "text": "In the general case Lx is a set of output sizes, which may be finite or infinite (as in constituency parsing(Collins, 1997)).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We use the notation [n] = {1, 2, . . . n}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We borrow the term passive-aggressive from(Crammer et al., 2006), despite the substantial difference between the works.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For the aggressive approach the equations for schemes (1) and (2) are changed such that JJx is replaced with I(y * , y, JJx) v .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The second author was partly supported by a research grant from the GIF Young Scientists' Program (No. I-2388-407.6/2015): Syntactic Parsing in Context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Incremental parsing with the perceptron algorithm", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Roark", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proc. of ACL.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Three generative, lexicalised models for statistical parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "16--23", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proc. of ACL, pages 16-23.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proc. of EMNLP, pages 1-8.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Ultraconservative online algorithms for multiclass problems", |
| "authors": [ |
| { |
| "first": "Koby", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "The Journal of Machine Learning Research", |
| "volume": "3", |
| "issue": "", |
| "pages": "951--991", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Koby Crammer and Yoram Singer. 2003. Ultraconser- vative online algorithms for multiclass problems. The Journal of Machine Learning Research, 3:951-991.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Shai Shalev-Shwartz, and Yoram Singer", |
| "authors": [ |
| { |
| "first": "Koby", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ofer", |
| "middle": [], |
| "last": "Dekel", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Keshet", |
| "suffix": "" |
| }, |
| { |
| "first": "Shai", |
| "middle": [], |
| "last": "Shalev-Shwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "The Journal of Machine Learning Research", |
| "volume": "7", |
| "issue": "", |
| "pages": "551--585", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev- Shwartz, and Yoram Singer. 2006. Online passive- aggressive algorithms. The Journal of Machine Learn- ing Research, 7:551-585.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Learning as search optimization: Approximate large margin methods for structured prediction", |
| "authors": [ |
| { |
| "first": "Hal", |
| "middle": [], |
| "last": "Daum\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Iii", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "169--176", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2005. Learning as search optimization: Approximate large margin meth- ods for structured prediction. In Proc. of ICML, pages 169-176.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Large margin classification using the perceptron algorithm", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Freund", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "E" |
| ], |
| "last": "Schapire", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Machine learning", |
| "volume": "37", |
| "issue": "3", |
| "pages": "277--296", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Freund and Robert E Schapire. 1999. Large margin classification using the perceptron algorithm. Machine learning, 37(3):277-296.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "An efficient algorithm for easy-first non-directional dependency parsing", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Elhadad", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. of NAACL-HLT 2010", |
| "volume": "", |
| "issue": "", |
| "pages": "742--750", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Goldberg and Michael Elhadad. 2010. An effi- cient algorithm for easy-first non-directional depen- dency parsing. In Proc. of NAACL-HLT 2010, pages 742-750.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Knowledgebased weak supervision for information extraction of overlapping relations", |
| "authors": [ |
| { |
| "first": "Raphael", |
| "middle": [], |
| "last": "Hoffmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Congle", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiao", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "S" |
| ], |
| "last": "Weld", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In Proc. of ACL.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Structured perceptron with inexact search", |
| "authors": [ |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Suphan", |
| "middle": [], |
| "last": "Fayong", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proc. of NAACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "142--151", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. In Proc. of NAACL-HLT, pages 142-151.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Efficient thirdorder dependency parsers", |
| "authors": [ |
| { |
| "first": "Terry", |
| "middle": [], |
| "last": "Koo", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1--11", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Terry Koo and Michael Collins. 2010. Efficient third- order dependency parsers. In Proc. of ACL, pages 1- 11.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando Cn", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilis- tic models for segmenting and labeling sequence data. In Proc. of ICML.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Turning on the turbo: Fast third-order nonprojective turbo parsers", |
| "authors": [ |
| { |
| "first": "Andr\u00e9", |
| "middle": [ |
| "F", |
| "T" |
| ], |
| "last": "Martins", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Almeida", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Prc. of ACL short papers", |
| "volume": "", |
| "issue": "", |
| "pages": "617--622", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andr\u00e9 FT Martins, Miguel Almeida, and Noah A Smith. 2013. Turning on the turbo: Fast third-order non- projective turbo parsers. In Prc. of ACL short papers, pages 617-622.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Online learning of approximate dependency parsing algorithms", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [ |
| "T" |
| ], |
| "last": "McDonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [ |
| "C", |
| "N" |
| ], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proc. of EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan T McDonald and Fernando CN Pereira. 2006. On- line learning of approximate dependency parsing algo- rithms. In Proc. of EACL.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Online large-margin training of dependency parsers", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Koby", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "91--98", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of dependency parsers. In Proc. of ACL, pages 91-98.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Non-projective dependency parsing using spanning tree algorithms", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "Kiril", |
| "middle": [], |
| "last": "Ribarov", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of EMNLP-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "523--530", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Haji\u010d. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proc. of EMNLP- HLT, pages 523-530.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Learning efficiently with approximate inference via dual losses", |
| "authors": [ |
| { |
| "first": "Ofer", |
| "middle": [], |
| "last": "Meshi", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Sontag", |
| "suffix": "" |
| }, |
| { |
| "first": "Tommi", |
| "middle": [], |
| "last": "Jaakkola", |
| "suffix": "" |
| }, |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Globerson", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ofer Meshi, David Sontag, Tommi Jaakkola, and Amir Globerson. 2010. Learning efficiently with approxi- mate inference via dual losses. In Proc. of ICML.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "The conll 2007 shared task on dependency parsing", |
| "authors": [ |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Deniz", |
| "middle": [], |
| "last": "Yuret", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the CoNLL shared task session of EMNLP-CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "915--932", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The conll 2007 shared task on dependency parsing. In Proceedings of the CoNLL shared task session of EMNLP-CoNLL, pages 915-932. sn.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "An introduction to hidden markov models", |
| "authors": [ |
| { |
| "first": "Lawrence", |
| "middle": [], |
| "last": "Rabiner", |
| "suffix": "" |
| }, |
| { |
| "first": "Biing-Hwang", |
| "middle": [], |
| "last": "Juang", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "ASSP Magazine", |
| "volume": "3", |
| "issue": "1", |
| "pages": "4--16", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lawrence Rabiner and Biing-Hwang Juang. 1986. An introduction to hidden markov models. ASSP Maga- zine, IEEE, 3(1):4-16.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Multi event extraction guided by global constraints", |
| "authors": [ |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proc. of NAACL-HLT 2012", |
| "volume": "", |
| "issue": "", |
| "pages": "70--79", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roi Reichart and Regina Barzilay. 2012. Multi event extraction guided by global constraints. In Proc. of NAACL-HLT 2012, pages 70-79.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Max-margin markov networks", |
| "authors": [ |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Guestrin", |
| "suffix": "" |
| }, |
| { |
| "first": "Daphne", |
| "middle": [], |
| "last": "Koller", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ben Taskar, Carlos Guestrin, and Daphne Koller. 2004. Max-margin markov networks. In Proc. of NIPS.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Large margin methods for structured and interdependent output variables", |
| "authors": [ |
| { |
| "first": "Ioannis", |
| "middle": [], |
| "last": "Tsochantaridis", |
| "suffix": "" |
| }, |
| { |
| "first": "Thorsten", |
| "middle": [], |
| "last": "Joachims", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Hofmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Yasemin", |
| "middle": [], |
| "last": "Altun", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "In Journal of Machine Learning Research", |
| "volume": "", |
| "issue": "", |
| "pages": "1453--1484", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hof- mann, and Yasemin Altun. 2005. Large margin methods for structured and interdependent output vari- ables. In Journal of Machine Learning Research, pages 1453-1484.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Online learning of relaxed ccg grammars for parsing to logical form", |
| "authors": [ |
| { |
| "first": "Luke", |
| "middle": [ |
| "S" |
| ], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of EMNLP-CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "678--687", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luke S Zettlemoyer and Michael Collins. 2007. Online learning of relaxed ccg grammars for parsing to logical form. In Proc. of EMNLP-CoNLL, pages 678-687.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Joint word segmentation and pos tagging using a single perceptron", |
| "authors": [ |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "888--896", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yue Zhang and Stephen Clark. 2008. Joint word seg- mentation and pos tagging using a single perceptron. In proc. of ACL, pages 888-896.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Example parse trees: gold tree (y, top), predicted tree (y * , middle) with arcs differing from the gold's marked with a dashed line, and m J (y * , y) for J = [2, 5] (bottom tree)." |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Denoting Dx = [Cx], Dy = [Cy], and a permutation of a vector v with perm(v), the parameters of the different setups are: (1) simple(++), learnable(+++): Cx = 5, Cy = 3, P (y |y) = perm(0.7, 0.2, 0.1), P (x|y) = perm(0.75, 0.1, 0.05, 0.05, 0.05)." |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "text": "Overall Synthetic Data Results. A-and B-denote an aggressive and a balanced approaches, respectively. Acc. (std) is the average and the standard deviation of the accuracy across 10 test sets. # Wins is the number of test sets on which the SWVP algorithm outperforms CSP. Gener. is the number of times the best \u03b2 hyper-parameter value on the development set is also the best value on the test set, or the test set accuracy with the best development set \u03b2 is at most 0.5% lower than that with the best test set \u03b2.", |
| "html": null, |
| "num": null, |
| "content": "<table><tr><td>Language English Chinese Arabic Greek Italian Turkish Basque Catalan Hungarian</td><td>CSP 86.34 84.60 79.09 80.41 84.63 83.05 79.47 88.51 80.17</td><td>B-WM 86.4 84.5 79.17 80.20 84.64 82.89 79.54 88.46 80.07</td><td>First Order Top B-WM 86.7 85.04 79.21 80.28 84.74 82.89 79.54 88.50 80.07</td><td>Test B-WM 86.7 85.05 79.21 80.28 84.70 82.89 79.54 88.5 80.21</td><td>CSP 88.02 86.82 76.07 80.31 84.03 83.02 80.52 88.71 80.61</td><td>B-WM 87.82 86.69 75.94 80.40 84.08 83.04 80.57 88.81 80.45</td><td>Second Order Top B-WM 87.82 86.83 76.09 80.40 84.15 83.04 80.63 88.81 80.45</td><td>Test B-WM 87.92 87.02 76.09 80.61 84.28 83.31 80.64 88.82 80.55</td></tr><tr><td>Average</td><td>83.69</td><td>83.65</td><td>83.77</td><td>83.79</td><td>83.12</td><td>83.08</td><td>83.13</td><td>83.35</td></tr></table>" |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "text": "First and second order dependency parsing UAS results for CSP trained models, as well as for models trained with SWVP with a balanced \u03b3 selection (B) and with a weighted margin (WM) strategy. For explanation of the B-WM, Top B-WM, and Test B-WM see text. For each language and parsing order we highlight the best result in bold font, but this do not include results from Test B-WM as it is provided only as an upper bound on the performance of SWVP.", |
| "html": null, |
| "num": null, |
| "content": "<table/>" |
| } |
| } |
| } |
| } |