{
"paper_id": "P09-1042",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:53:51.020446Z"
},
"title": "Dependency Grammar Induction via Bitext Projection Constraints",
"authors": [
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"settlement": "Philadelphia",
"region": "PA",
"country": "USA"
}
},
"email": "kuzman@seas.upenn.edu"
},
{
"first": "Jennifer",
"middle": [],
"last": "Gillenwater",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"settlement": "Philadelphia",
"region": "PA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"settlement": "Philadelphia",
"region": "PA",
"country": "USA"
}
},
"email": "taskar@seas.upenn.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Broad-coverage annotated treebanks necessary to train parsers do not exist for many resource-poor languages. The wide availability of parallel text and accurate parsers in English has opened up the possibility of grammar induction through partial transfer across bitext. We consider generative and discriminative models for dependency grammar induction that use word-level alignments and a source language parser (English) to constrain the space of possible target trees. Unlike previous approaches, our framework does not require full projected parses, allowing partial, approximate transfer through linear expectation constraints on the space of distributions over trees. We consider several types of constraints that range from generic dependency conservation to language-specific annotation rules for auxiliary verb analysis. We evaluate our approach on Bulgarian and Spanish CoNLL shared task data and show that we consistently outperform unsupervised methods and can outperform supervised learning for limited training data.",
"pdf_parse": {
"paper_id": "P09-1042",
"_pdf_hash": "",
"abstract": [
{
"text": "Broad-coverage annotated treebanks necessary to train parsers do not exist for many resource-poor languages. The wide availability of parallel text and accurate parsers in English has opened up the possibility of grammar induction through partial transfer across bitext. We consider generative and discriminative models for dependency grammar induction that use word-level alignments and a source language parser (English) to constrain the space of possible target trees. Unlike previous approaches, our framework does not require full projected parses, allowing partial, approximate transfer through linear expectation constraints on the space of distributions over trees. We consider several types of constraints that range from generic dependency conservation to language-specific annotation rules for auxiliary verb analysis. We evaluate our approach on Bulgarian and Spanish CoNLL shared task data and show that we consistently outperform unsupervised methods and can outperform supervised learning for limited training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "For English and a handful of other languages, there are large, well-annotated corpora with a variety of linguistic information ranging from named entity to discourse structure. Unfortunately, for the vast majority of languages very few linguistic resources are available. This situation is likely to persist because of the expense of creating annotated corpora that require linguistic expertise (Abeill\u00e9, 2003) . On the other hand, parallel corpora between many resource-poor languages and resource-rich languages are ample, motivat-ing recent interest in transferring linguistic resources from one language to another via parallel text. For example, several early works Merlo et al., 2002) demonstrate transfer of shallow processing tools such as part-of-speech taggers and noun-phrase chunkers by using word-level alignment models (Brown et al., 1994; Och and Ney, 2000) . Alshawi et al. (2000) and Hwa et al. (2005) explore transfer of deeper syntactic structure: dependency grammars. Dependency and constituency grammar formalisms have long coexisted and competed in linguistics, especially beyond English (Mel'\u010duk, 1988) . Recently, dependency parsing has gained popularity as a simpler, computationally more efficient alternative to constituency parsing and has spurred several supervised learning approaches (Eisner, 1996; Yamada and Matsumoto, 2003a; Nivre and Nilsson, 2005; McDonald et al., 2005) as well as unsupervised induction (Klein and Manning, 2004; Smith and Eisner, 2006) . Dependency representation has been used for language modeling, textual entailment and machine translation (Haghighi et al., 2005; Chelba et al., 1997; Quirk et al., 2005; Shen et al., 2008) , to name a few tasks.",
"cite_spans": [
{
"start": 395,
"end": 410,
"text": "(Abeill\u00e9, 2003)",
"ref_id": "BIBREF0"
},
{
"start": 671,
"end": 690,
"text": "Merlo et al., 2002)",
"ref_id": "BIBREF21"
},
{
"start": 833,
"end": 853,
"text": "(Brown et al., 1994;",
"ref_id": "BIBREF3"
},
{
"start": 854,
"end": 872,
"text": "Och and Ney, 2000)",
"ref_id": "BIBREF25"
},
{
"start": 875,
"end": 896,
"text": "Alshawi et al. (2000)",
"ref_id": "BIBREF1"
},
{
"start": 901,
"end": 918,
"text": "Hwa et al. (2005)",
"ref_id": "BIBREF13"
},
{
"start": 1110,
"end": 1125,
"text": "(Mel'\u010duk, 1988)",
"ref_id": "BIBREF20"
},
{
"start": 1315,
"end": 1329,
"text": "(Eisner, 1996;",
"ref_id": "BIBREF7"
},
{
"start": 1330,
"end": 1358,
"text": "Yamada and Matsumoto, 2003a;",
"ref_id": "BIBREF32"
},
{
"start": 1359,
"end": 1383,
"text": "Nivre and Nilsson, 2005;",
"ref_id": "BIBREF23"
},
{
"start": 1384,
"end": 1406,
"text": "McDonald et al., 2005)",
"ref_id": "BIBREF19"
},
{
"start": 1441,
"end": 1466,
"text": "(Klein and Manning, 2004;",
"ref_id": "BIBREF14"
},
{
"start": 1467,
"end": 1490,
"text": "Smith and Eisner, 2006)",
"ref_id": "BIBREF28"
},
{
"start": 1599,
"end": 1622,
"text": "(Haghighi et al., 2005;",
"ref_id": "BIBREF12"
},
{
"start": 1623,
"end": 1643,
"text": "Chelba et al., 1997;",
"ref_id": "BIBREF4"
},
{
"start": 1644,
"end": 1663,
"text": "Quirk et al., 2005;",
"ref_id": "BIBREF26"
},
{
"start": 1664,
"end": 1682,
"text": "Shen et al., 2008)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Dependency grammars are arguably more robust to transfer since syntactic relations between aligned words of parallel sentences are better conserved in translation than phrase structure (Fox, 2002; Hwa et al., 2005) . Nevertheless, several challenges to accurate training and evaluation from aligned bitext remain: (1) partial word alignment due to non-literal or distant translation; (2) errors in word alignments and source language parses, (3) grammatical annotation choices that differ across languages and linguistic theories (e.g., how to analyze auxiliary verbs, conjunctions).",
"cite_spans": [
{
"start": 185,
"end": 196,
"text": "(Fox, 2002;",
"ref_id": "BIBREF8"
},
{
"start": 197,
"end": 214,
"text": "Hwa et al., 2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a flexible learning framework for transferring dependency grammars via bitext using the posterior regularization framework (Gra\u00e7a et al., 2008) . In particular, we address challenges (1) and (2) by avoiding commitment to an entire projected parse tree in the target language during training. Instead, we explore formulations of both generative and discriminative probabilistic models where projected syntactic relations are constrained to hold approximately and only in expectation. Finally, we address challenge (3) by introducing a very small number of language-specific constraints that disambiguate arbitrary annotation choices. We evaluate our approach by transferring from an English parser trained on the Penn treebank to Bulgarian and Spanish. We evaluate our results on the Bulgarian and Spanish corpora from the CoNLL X shared task. We see that our transfer approach consistently outperforms unsupervised methods and, given just a few (2 to 7) languagespecific constraints, performs comparably to a supervised parser trained on a very limited corpus (30 -140 training sentences).",
"cite_spans": [
{
"start": 149,
"end": 169,
"text": "(Gra\u00e7a et al., 2008)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "At a high level our approach is illustrated in Figure 1(a) . A parallel corpus is word-level aligned using an alignment toolkit (Gra\u00e7a et al., 2009) and the source (English) is parsed using a dependency parser (McDonald et al., 2005) . Figure 1 (b) shows an aligned sentence pair example where dependencies are perfectly conserved across the alignment. An edge from English parent p to child c is called conserved if word p aligns to word p in the second language, c aligns to c in the second language, and p is the parent of c . Note that we are not restricting ourselves to one-to-one alignments here; p, c, p , and c can all also align to other words. After filtering to identify well-behaved sentences and high confidence projected dependencies, we learn a probabilistic parsing model using the posterior regularization framework (Gra\u00e7a et al., 2008) . We estimate both generative and discriminative models by constraining the posterior distribution over possible target parses to approximately respect projected dependencies and other rules which we describe below. In our experiments we evaluate the learned models on dependency treebanks (Nivre et al., 2007) .",
"cite_spans": [
{
"start": 128,
"end": 148,
"text": "(Gra\u00e7a et al., 2009)",
"ref_id": "BIBREF11"
},
{
"start": 210,
"end": 233,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF19"
},
{
"start": 834,
"end": 854,
"text": "(Gra\u00e7a et al., 2008)",
"ref_id": "BIBREF10"
},
{
"start": 1145,
"end": 1165,
"text": "(Nivre et al., 2007)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 47,
"end": 58,
"text": "Figure 1(a)",
"ref_id": null
},
{
"start": 236,
"end": 244,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
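The edge-projection step described above can be sketched in code. The following is a minimal illustration with hypothetical data structures (a parent map for the source parse and a set of alignment pairs), not the authors' implementation:

```python
def project_edges(src_parents, alignment):
    """Project source dependencies across a word alignment.

    src_parents: dict child_index -> parent_index (source-language parse).
    alignment: set of (src_index, tgt_index) pairs; many-to-many allowed.
    Returns the set of directed target edges (parent', child') such that
    the source parent aligns to parent' and the source child to child'.
    """
    projected = set()
    for child, parent in src_parents.items():
        tgt_parents = {t for s, t in alignment if s == parent}
        tgt_children = {t for s, t in alignment if s == child}
        for tp in tgt_parents:
            for tc in tgt_children:
                if tp != tc:  # a word cannot head itself
                    projected.add((tp, tc))
    return projected
```

A projected edge is then conserved exactly when it also appears in the target-language parse; the filtering step keeps sentence pairs with enough high-confidence projections.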
{
"text": "Unfortunately the sentence in Figure 1 (b) is highly unusual in its amount of dependency con-servation. To get a feel for the typical case, we used off-the-shelf parsers (McDonald et al., 2005) for English, Spanish and Bulgarian on two bitexts (Koehn, 2005; Tiedemann, 2007) and compared several measures of dependency conservation. For the English-Bulgarian corpus, we observed that 71.9% of the edges we projected were edges in the corpus, and we projected on average 2.7 edges per sentence (out of 5.3 tokens on average). For Spanish, we saw conservation of 64.4% and an average of 5.9 projected edges per sentence (out of 11.5 tokens on average).",
"cite_spans": [
{
"start": 170,
"end": 193,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF19"
},
{
"start": 244,
"end": 257,
"text": "(Koehn, 2005;",
"ref_id": "BIBREF15"
},
{
"start": 258,
"end": 274,
"text": "Tiedemann, 2007)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 30,
"end": 38,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
{
"text": "As these numbers illustrate, directly transferring information one dependency edge at a time is unfortunately error prone for two reasons. First, parser and word alignment errors cause much of the transferred information to be wrong. We deal with this problem by constraining groups of edges rather than a single edge. For example, in some sentence pair we might find 10 edges that have both end points aligned and can be transferred. Rather than requiring our target language parse to contain each of the 10 edges, we require that the expected number of edges from this set is at least 10\u03b7, where \u03b7 is a strength parameter. This gives the parser freedom to have some uncertainty about which edges to include, or alternatively to choose to exclude some of the transferred edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
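The group constraint described above (expected number of transferred edges at least 10η for a set of 10 candidates) can be checked directly from edge marginals in an edge-factored model. A small sketch, with hypothetical dictionary-based marginals:

```python
def constraint_satisfied(edge_marginals, projected_edges, eta=0.9):
    """Check the group constraint: the expected number of projected edges
    present in the target parse must be at least eta * |projected set|.

    edge_marginals: dict (parent, child) -> posterior probability that
    the edge appears in the tree (edge-factored model).
    projected_edges: set of candidate edges transferred from the source.
    """
    expected = sum(edge_marginals.get(e, 0.0) for e in projected_edges)
    return expected >= eta * len(projected_edges)
```

Because only the expectation is bounded, the parser may distribute the required probability mass over the candidate set rather than committing to every transferred edge.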
{
"text": "A more serious problem for transferring parse information across languages are structural differences and grammar annotation choices between the two languages. For example dealing with auxiliary verbs and reflexive constructions. Hwa et al. (2005) also note these problems and solve them by introducing dozens of rules to transform the transferred parse trees. We discuss these differences in detail in the experimental section and use our framework introduce a very small number of rules to cover the most common structural differences.",
"cite_spans": [
{
"start": 230,
"end": 247,
"text": "Hwa et al. (2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
{
"text": "We explored two parsing models: a generative model used by several authors for unsupervised induction and a discriminative model used for fully supervised training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models",
"sec_num": "3"
},
{
"text": "The discriminative parser is based on the edge-factored model and features of the MST-Parser (McDonald et al., 2005 ). The parsing model defines a conditional distribution p \u03b8 (z | x) over each projective parse tree z for a particular sentence x, parameterized by a vector \u03b8. The prob- ability of any particular parse is",
"cite_spans": [
{
"start": 82,
"end": 115,
"text": "MST-Parser (McDonald et al., 2005",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p \u03b8 (z | x) \u221d z\u2208z e \u03b8\u2022\u03c6(z,x) ,",
"eq_num": "(1)"
}
],
"section": "Parsing Models",
"sec_num": "3"
},
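For toy sentences, the edge-factored distribution of equation (1) can be computed by brute force: enumerate candidate trees, sum feature weights over each tree's edges, and normalize. This is a sketch with a hypothetical feature function; a real parser computes these quantities with the inside-outside and Eisner algorithms instead of enumeration.

```python
import math

def tree_posterior(trees, theta, edge_features):
    """Brute-force version of eq. (1): p(z | x) proportional to
    exp(theta . phi(z, x)), where phi sums features over the edges of z.

    trees: list of candidate parses, each a set of (parent, child) edges.
    theta: dict feature_name -> weight.
    edge_features: function edge -> list of feature names (hypothetical).
    """
    def score(tree):
        return sum(theta.get(f, 0.0) for e in tree for f in edge_features(e))
    weights = [math.exp(score(t)) for t in trees]
    Z = sum(weights)  # partition function over the candidate set
    return [w / Z for w in weights]
```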
{
"text": "where z is a directed edge contained in the parse tree z and \u03c6 is a feature function. In the fully supervised experiments we run for comparison, parameter estimation is performed by stochastic gradient ascent on the conditional likelihood function, similar to maximum entropy models or conditional random fields. One needs to be able to compute expectations of the features \u03c6(z, x) under the distribution p \u03b8 (z | x). A version of the insideoutside algorithm (Lee and Choi, 1997) performs this computation. Viterbi decoding is done using Eisner's algorithm (Eisner, 1996) . We also used a generative model based on dependency model with valence (Klein and Manning, 2004 ). Under this model, the probability of a particular parse z and a sentence with part of speech tags x is given by",
"cite_spans": [
{
"start": 459,
"end": 479,
"text": "(Lee and Choi, 1997)",
"ref_id": "BIBREF16"
},
{
"start": 557,
"end": 571,
"text": "(Eisner, 1996)",
"ref_id": "BIBREF7"
},
{
"start": 645,
"end": 669,
"text": "(Klein and Manning, 2004",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models",
"sec_num": "3"
},
{
"text": "p \u03b8 (z, x) = p root (r(x)) \u2022 (2) z\u2208z p \u00acstop (z p , z d , v z ) p child (z p , z d , z c ) \u2022 x\u2208x p stop (x, left, v l ) p stop (x, right, v r )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models",
"sec_num": "3"
},
{
"text": "where r(x) is the part of speech tag of the root of the parse tree z, z is an edge from parent z p to child z c in direction z d , either left or right, and v z indicates valency-false if z p has no other children further from it in direction z d than z c , true otherwise. The valencies v r /v l are marked as true if x has any children on the left/right in z, false otherwise. Gra\u00e7a et al. (2008) introduce an estimation frame-work that incorporates side-information into unsupervised problems in the form of linear constraints on posterior expectations. In grammar transfer, our basic constraint is of the form: the expected proportion of conserved edges in a sentence pair is at least \u03b7 (the exact proportion we used was 0.9, which was determined using unlabeled data as described in Section 5). Specifically, let C x be the set of directed edges projected from English for a given sentence x, then given a parse z, the proportion of conserved edges is",
"cite_spans": [
{
"start": 379,
"end": 398,
"text": "Gra\u00e7a et al. (2008)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models",
"sec_num": "3"
},
{
"text": "f (x, z) = 1 |Cx| z\u2208z 1(z \u2208 C x ) and the expected proportion of conserved edges under distribution p(z | x) is E p [f (x, z)] = 1 |C x | z\u2208Cx p(z | x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "The posterior regularization framework (Gra\u00e7a et al., 2008) was originally defined for generative unsupervised learning. The standard objective is to minimize the negative marginal log-likelihood of the data :",
"cite_spans": [
{
"start": 39,
"end": 59,
"text": "(Gra\u00e7a et al., 2008)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "E[\u2212 log p \u03b8 (x)] = E[\u2212 log z p \u03b8 (z, x)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "over the parameters \u03b8 (we use E to denote expectation over the sample sentences x). We typically also add standard regularization term on \u03b8, resulting from a parameter prior \u2212 log p(\u03b8) = R(\u03b8), where p(\u03b8) is Gaussian for the MST-Parser models and Dirichlet for the valence model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "To introduce supervision into the model, we define a set Q x of distributions over the hidden variables z satisfying the desired posterior constraints in terms of linear equalities or inequalities on feature expectations (we use inequalities in this paper): In this paper, for example, we use the conservededge-proportion constraint as defined above. The marginal log-likelihood objective is then modified with a penalty for deviation from the desired set of distributions, measured by KLdivergence from the set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "Q x = {q(z) : E[f (x, z)] \u2264 b}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "Q x , KL(Q x ||p \u03b8 (z|x)) = min q\u2208Qx KL(q(z)||p \u03b8 (z|x)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "The generative learning objective is to minimize:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "E[\u2212 log p \u03b8 (x)] + R(\u03b8) + E[KL(Q x ||p \u03b8 (z | x))].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "For discriminative estimation , we do not attempt to model the marginal distribution of x, so we simply have the two regularization terms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "R(\u03b8) + E[KL(Q x ||p \u03b8 (z | x))].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "Note that the idea of regularizing moments is related to generalized expectation criteria algorithm of Mann and McCallum (2007) , as we discuss in the related work section below. In general, the objectives above are not convex in \u03b8. To optimize these objectives, we follow an Expectation Maximization-like scheme. Recall that standard EM iterates two steps. An E-step computes a probability distribution over the model's hidden variables (posterior probabilities) and an M-step that updates the model's parameters based on that distribution. The posterior-regularized EM algorithm leaves the M-step unchanged, but involves projecting the posteriors onto a constraint set after they are computed for each sentence x:",
"cite_spans": [
{
"start": 103,
"end": 127,
"text": "Mann and McCallum (2007)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
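The modified EM loop can be sketched on a deliberately tiny model: a two-component Bernoulli mixture with one binary hidden variable per instance, rather than a parser. This is only a structural illustration (the model, constraint, and numbers are invented); for a single binary variable with constraint q(z=1) >= b, the exact KL projection simply clips q(z=1) up to b when it falls below.

```python
def pr_em(xs, b, iters=100):
    """Toy posterior-regularized EM on a two-component Bernoulli mixture.

    xs: list of observed 0/1 values; hidden z in {0, 1} per instance.
    Per-instance constraint: q(z=1) >= b. E-step computes posteriors and
    projects them; M-step re-estimates parameters from projected posteriors.
    """
    pi, p0, p1 = 0.5, 0.3, 0.7  # mixing weight and component means
    for _ in range(iters):
        qs = []
        for x in xs:  # E-step with projection onto the constraint set
            l1 = pi * (p1 if x else 1 - p1)
            l0 = (1 - pi) * (p0 if x else 1 - p0)
            q1 = l1 / (l0 + l1)
            q1 = max(q1, b)  # exact KL projection onto {q : q(1) >= b}
            qs.append((x, q1))
        # M-step: standard mixture updates from the projected posteriors
        pi = sum(q for _, q in qs) / len(qs)
        p1 = sum(x * q for x, q in qs) / max(sum(q for _, q in qs), 1e-12)
        p0 = sum(x * (1 - q) for x, q in qs) / max(sum(1 - q for _, q in qs), 1e-12)
        p0 = min(max(p0, 1e-6), 1 - 1e-6)
        p1 = min(max(p1, 1e-6), 1 - 1e-6)
    return pi, p0, p1
```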
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "arg min q KL(q(z) p \u03b8 (z|x)) s.t. E q [f (x, z)] \u2264 b,",
"eq_num": "(3)"
}
],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "where p \u03b8 (z|x) are the posteriors. The new posteriors q(z) are used to compute sufficient statistics for this instance and hence to update the model's parameters in the M-step for either the generative or discriminative setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "The optimization problem in Equation 3 can be efficiently solved in its dual formulation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "arg min \u03bb\u22650 b \u03bb+log z p \u03b8 (z | x) exp {\u2212\u03bb f (x, z)}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "(4) Given \u03bb, the primal solution is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "q(z) = p \u03b8 (z | x) exp{\u2212\u03bb f (x, z)}/Z,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "where Z is a normalization constant. There is one dual variable per expectation constraint, and we can optimize them by projected gradient descent, similar to log-linear model estimation. The gradient with respect to \u03bb is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": "b \u2212 E q [f (x, z)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
{
"text": ", so it involves computing expectations under the distribution q(z). This remains tractable as long as features factor by edge, f (x, z) = z\u2208z f (x, z), because that ensures that q(z) will have the same form as p \u03b8 (z | x). Furthermore, since the constraints are per instance, we can use incremental or online version of EM (Neal and Hinton, 1998) , where we update parameters \u03b8 after posterior-constrained E-step on each instance x.",
"cite_spans": [
{
"start": 324,
"end": 347,
"text": "(Neal and Hinton, 1998)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior Regularization",
"sec_num": "4"
},
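The dual optimization above can be illustrated with a single constraint and an explicitly enumerated distribution over parses. This is a sketch: in practice E_q[f] is computed with inside-outside over edge-factored q(z), not by enumeration, and the toy probabilities and bound below are invented.

```python
import math

def project_onto_constraint(tree_probs, f_vals, b, step=0.5, iters=500):
    """Projected gradient descent on the one-constraint dual:
    min over lam >= 0 of b*lam + log sum_z p(z) * exp(-lam * f(z)).

    tree_probs: list of p_theta(z | x) over enumerated parses z.
    f_vals: list of f(x, z) for the same parses; b: constraint bound.
    Returns constrained posteriors q(z) = p(z) exp(-lam f(z)) / Z,
    which satisfy E_q[f] <= b at the optimum.
    """
    lam = 0.0
    for _ in range(iters):
        weights = [p * math.exp(-lam * f) for p, f in zip(tree_probs, f_vals)]
        Z = sum(weights)
        q = [w / Z for w in weights]
        grad = b - sum(qi * fi for qi, fi in zip(q, f_vals))  # dual gradient
        lam = max(0.0, lam - step * grad)  # projection onto lam >= 0
    weights = [p * math.exp(-lam * f) for p, f in zip(tree_probs, f_vals)]
    Z = sum(weights)
    return [w / Z for w in weights]
```

When the constraint is already satisfied at lam = 0, the gradient b - E_q[f] is nonnegative and the projection keeps lam at 0, so q equals the unconstrained posterior.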
{
"text": "We conducted experiments on two languages: Bulgarian and Spanish, using each of the parsing models. The Bulgarian experiments transfer a parser from English to Bulgarian, using the Open-Subtitles corpus (Tiedemann, 2007) . The Spanish experiments transfer from English to Spanish using the Spanish portion of the Europarl corpus (Koehn, 2005) . For both corpora, we performed word alignments with the open source PostCAT (Gra\u00e7a et al., 2009) toolkit. We used the Tokyo tagger (Tsuruoka and Tsujii, 2005) to POS tag the English tokens, and generated parses using the first-order model of McDonald et al. (2005) with projective decoding, trained on sections 2-21 of the Penn treebank with dependencies extracted using the head rules of Yamada and Matsumoto (2003b) . For Bulgarian we trained the Stanford POS tagger (Toutanova et al., 2003) gtreebank corpus from CoNLL X. The Spanish Europarl data was POS tagged with the FreeLing language analyzer (Atserias et al., 2006) . The discriminative model used the same features as MST-Parser, summarized in Table 1 . In order to evaluate our method, we a baseline inspired by Hwa et al. (2005) . The baseline constructs a full parse tree from the incomplete and possibly conflicting transferred edges using a simple random process. We start with no edges and try to add edges one at a time verifying at each step that it is possible to complete the tree. We first try to add the transferred edges in random order, then for each orphan node we try all possible parents (both in random order). We then use this full labeling as supervision for a parser. Note that this baseline is very similar to the first iteration of our model, since for a large corpus the different random choices made in different sentences tend to smooth each other out. We also tried to create rules for the adoption of orphans, but the simple rules we tried added bias and performed worse than the baseline we report. 
Table 2 shows attachment accuracy of our method and the baseline for both language pairs under several conditions. By attachment accuracy we mean the fraction of words assigned the correct parent. The experimental details are described in this section. Link-left baselines for these corpora are much lower: 33.8% and 27.9% for Bulgarian and Spanish respectively.",
"cite_spans": [
{
"start": 203,
"end": 220,
"text": "(Tiedemann, 2007)",
"ref_id": "BIBREF29"
},
{
"start": 329,
"end": 342,
"text": "(Koehn, 2005)",
"ref_id": "BIBREF15"
},
{
"start": 421,
"end": 441,
"text": "(Gra\u00e7a et al., 2009)",
"ref_id": "BIBREF11"
},
{
"start": 476,
"end": 503,
"text": "(Tsuruoka and Tsujii, 2005)",
"ref_id": "BIBREF31"
},
{
"start": 587,
"end": 609,
"text": "McDonald et al. (2005)",
"ref_id": "BIBREF19"
},
{
"start": 734,
"end": 762,
"text": "Yamada and Matsumoto (2003b)",
"ref_id": "BIBREF33"
},
{
"start": 814,
"end": 838,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF30"
},
{
"start": 947,
"end": 970,
"text": "(Atserias et al., 2006)",
"ref_id": "BIBREF2"
},
{
"start": 1119,
"end": 1136,
"text": "Hwa et al. (2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 1050,
"end": 1057,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1934,
"end": 1941,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Preliminary experiments showed that our word alignments were not always appropriate for syntactic transfer, even when they were correct for translation. For example, the English \"bike/V\" could be translated in French as \"aller/V en v\u00e9lo/N\", where the word \"bike\" would be aligned with \"v\u00e9lo\". While this captures some of the semantic shared information in the two languages, we have no expectation that the noun \"v\u00e9lo\" will have a similar syntactic behavior to the verb \"bike\". To prevent such false transfer, we filter out alignments between incompatible POS tags. In both language pairs, filtering out noun-verb alignments gave the biggest improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "5.1"
},
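The POS-based alignment filter can be sketched as follows. The incompatibility list here is hypothetical: the text only specifies that removing noun-verb alignments helped most.

```python
# Hypothetical POS-incompatibility list; the paper only reports that
# noun-verb links were the most useful ones to remove.
INCOMPATIBLE = {("NOUN", "VERB"), ("VERB", "NOUN")}

def filter_alignments(alignment, src_pos, tgt_pos):
    """Drop alignment links whose source/target POS tags are incompatible.

    alignment: iterable of (src_index, tgt_index) pairs.
    src_pos, tgt_pos: lists of POS tags for the two sentences.
    """
    return {(s, t) for s, t in alignment
            if (src_pos[s], tgt_pos[t]) not in INCOMPATIBLE}
```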
{
"text": "Both corpora also contain sentence fragments, either because of question responses or fragmented speech in movie subtitles or because of voting announcements and similar formulaic sentences in the parliamentary proceedings. We overcome this problem by filtering out sentences that do not have a verb as the English root or for which the English root is not aligned to a verb in the target language. For the subtitles corpus we also remove sentences that end in an ellipsis or contain more than one comma. Finally, following (Klein and Manning, 2004) we strip out punctuation from the sentences. For the discriminative model this did not affect results significantly but improved them slightly in most cases. We found that the generative model gets confused by punctuation and tends to predict that periods at the end of sentences are the parents of words in the sentence.",
"cite_spans": [
{
"start": 524,
"end": 549,
"text": "(Klein and Manning, 2004)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "5.1"
},
{
"text": "Our basic model uses constraints of the form: the expected proportion of conserved edges in a sentence pair is at least \u03b7 = 90%. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "5.1"
},
{
"text": "We call the generic model described above \"norules\" to distinguish it from the language-specific constraints we introduce in the sequel. The no rules columns of Table 2 summarize the performance in this basic setting. Discriminative models outperform the generative models in the majority of cases. The left panel of Table 3 shows the most common errors by child POS tag, as well as by true parent and guessed parent POS tag. Figure 2 shows that the discriminative model continues to improve with more transfer-type data up to at least 40 thousand sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 161,
"end": 168,
"text": "Table 2",
"ref_id": null
},
{
"start": 317,
"end": 324,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 426,
"end": 434,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "No Language-Specific Rules",
"sec_num": "5.2"
},
{
"text": "Using the straightforward approach outlined above is a dramatic improvement over the standard link-left baseline (and the unsupervised generative model as we discuss below), however it doesn't have any information about the annotation guidelines used for the testing corpus. For example, the Bulgarian corpus has an unusual treatment of nonfinite clauses. Figure 4 shows an example. We see that the \"da\" is the parent of both the verb and its object, which is different than the treatment in the English corpus. We propose to deal with these annotation dissimilarities by creating very simple rules. For Spanish, we have three rules. The first rule sets main verbs to dominate auxiliary verbs. Specifically, whenever an auxiliary precedes a main verb the main verb becomes its parent and adopts its children; if there is only one main verb it becomes the root of the sentence; main verbs also become Figure 4 : An example where transfer fails because of different handling of reflexives and nonfinite clauses. The alignment links provide correct glosses for Bulgarian words. \"Bh\" is a past tense marker while \"se\" is a reflexive marker. parents of pronouns, adverbs, and common nouns that directly preceed auxiliary verbs. By adopting children we mean that we change the parent of transferred edges to be the adopting node. The second Spanish rule states that the first element of an adjective-noun or noun-adjective pair dominates the second; the first element also adopts the children of the second element. The third and final Spanish rule sets all prepositions to be children of the first main verb in the sentence, unless the preposition is a \"de\" located between two noun phrases. In this later case, we set the closest noun in the first of the two noun phrases as the preposition's parent.",
"cite_spans": [],
"ref_spans": [
{
"start": 356,
"end": 364,
"text": "Figure 4",
"ref_id": null
},
{
"start": 900,
"end": 908,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotation guidelines and constraints",
"sec_num": "5.3"
},
{
"text": "For Bulgarian the first rule is that \"da\" should dominate all words until the next verb and adopt their noun, preposition, particle and adverb children. The second rule is that auxiliary verbs should dominate main verbs and adopt their children. We have a list of 12 Bulgarian auxiliary verbs. The \"seven rules\" experiments add rules for 5 more words similar to the rule for \"da\", specifically \"qe\", \"li\", \"kakvo\", \"ne\", \"za\". Table 3 compares the errors for different linguistic rules. When we train using the \"da\" rule and the rules for auxiliary verbs, the model learns that main verbs attach to auxiliary verbs and that \"da\" dominates its nonfinite clause. This causes an improvement in the attachment of verbs, and also drastically reduces words being attached to verbs instead of particles. The latter is expected because \"da\" is analyzed as a particle in the Bulgarian POS tagset. We see an improvement in root/verb confusions since \"da\" is sometimes errenously attached to a the following verb rather than being the root of the sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 427,
"end": 434,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Annotation guidelines and constraints",
"sec_num": "5.3"
},
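The Bulgarian \"da\" rule can be sketched in the same style. Again this is a hypothetical reconstruction: the tag abbreviations follow the Bulgarian POS letters used in Table 3 (V verb, N noun, R preposition, T particle), with an assumed 'ADV' tag for adverbs.

```python
# Illustrative sketch of the Bulgarian "da" rule: "da" dominates all words up
# to and including the next verb, and adopts their noun, preposition, particle
# and adverb children. The data representation is assumed, not the authors'.

ADOPTABLE = {'N', 'R', 'T', 'ADV'}

def apply_da_rule(words, tags, parent):
    """words/tags: parallel lists; parent: dict mapping child index -> parent index."""
    for i, word in enumerate(words):
        if word != 'da':
            continue
        # collect the span from "da" up to and including the next verb
        span = []
        for j in range(i + 1, len(words)):
            span.append(j)
            if tags[j] == 'V':
                break
        for k in span:
            parent[k] = i  # "da" dominates every word in the span
        for child in list(parent):  # "da" adopts qualifying children of the span
            if child not in span and parent[child] in span and tags[child] in ADOPTABLE:
                parent[child] = i
    return parent

# As in Figure 4: "da" ends up the parent of both the verb and its noun object
print(apply_da_rule(['da', 'vb', 'nn'], ['T', 'V', 'N'], {1: -1, 2: 1}))  # {1: 0, 2: 0}
```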
{
"text": "The rightmost panel of Table 3 shows a similar analysis when we also use the rules for the five other closed-class words. We see an improvement in attachments in all categories, but no qualitative change is visible. The reason is probably that these words are relatively rare; however, by encouraging the model to add an edge, a rule also rules out incorrect edges that would cross it. Consequently, we see improvements not only directly from the constraints we enforce but also indirectly from the incorrect edge types they tend to rule out.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Annotation guidelines and constraints",
"sec_num": "5.3"
},
{
"text": "The generative model we use is a state-of-the-art model for unsupervised parsing and is our only fully unsupervised baseline. As smoothing we add a very small backoff probability of 4.5 \u00d7 10\u207b\u2075 to each learned parameter. Unfortunately, we found the generative model's performance disappointing overall. The maximum unsupervised accuracy it achieved on the Bulgarian data is 47.6% with initialization from Klein and Manning (2004) and this result is not stable. Changing the initialization parameters, training sample, or maximum sentence length used for training drastically affected the results, even for samples with several thousand sentences. When we use the transferred information to constrain the learning, EM stabilizes and achieves much better performance. Even setting all parameters equal at the outset does not prevent the model from learning the dependency structure of the aligned language. The top panels in Figure 5 show the results in this setting. We see that performance is still always below the accuracy achieved by supervised training on 20 annotated sentences. However, the improvement in stability makes the algorithm much more usable. As we shall see below, the discriminative parser performs even better than the generative model.",
"cite_spans": [
{
"start": 402,
"end": 426,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 920,
"end": 928,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Generative parser",
"sec_num": "5.4"
},
{
"text": "We trained our discriminative parser for 100 iterations of online EM with a Gaussian prior variance of 100. Results for the discriminative parser are shown in the bottom panels of Figure 5. The supervised experiments are given to provide context for the accuracies. For Bulgarian, we see that without any hints about the annotation guidelines, the transfer system performs better than an unsupervised parser, comparable to a supervised parser trained on 10 sentences. However, if we specify just the two rules for \"da\" and verb conjugations, performance jumps to that of training on 60-70 fully labeled sentences. If we have just a little more prior knowledge about how closed-class words are handled, performance jumps above the equivalent of 140 fully labeled sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 180,
"end": 188,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Discriminative parser",
"sec_num": "5.5"
},
{
"text": "We observed another desirable property of the discriminative model. While the generative model can get confused and perform poorly when the training data contains very long sentences, the discriminative parser does not appear to have this drawback. In fact, we observed that as the maximum training sentence length increased, the parsing performance also improved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative parser",
"sec_num": "5.5"
},
{
"text": "Our work most closely relates to Hwa et al. (2005), who proposed to learn generative dependency grammars using Collins' parser (Collins, 1999) by constructing full target parses via projected dependencies and completion/transformation rules. Hwa et al. (2005) found that transferring dependencies directly was not sufficient to get a parser with reasonable performance, even when both the source language parses and the word alignments are performed by hand. They adjusted for this by introducing on the order of one or two dozen language-specific transformation rules to complete target parses for unaligned words and to account for diverging annotation rules. Transferring from English to Spanish in this way, they achieve 72.1%, and transferring to Chinese they achieve 53.9%.",
"cite_spans": [
{
"start": 33,
"end": 50,
"text": "Hwa et al. (2005)",
"ref_id": "BIBREF13"
},
{
"start": 128,
"end": 143,
"text": "(Collins, 1999)",
"ref_id": "BIBREF5"
},
{
"start": 243,
"end": 260,
"text": "Hwa et al. (2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Our learning method is very closely related to the work of (Mann and McCallum, 2007; Mann and McCallum, 2008), who concurrently developed the idea of using penalties based on posterior expectations of features not necessarily in the model in order to guide learning. They call their method generalized expectation constraints or, alternatively, expectation regularization. In this volume, (Druck et al., 2009) use this framework to train a dependency parser based on constraints stated as corpus-wide expected values of linguistic rules. The rules select a class of edges (e.g. auxiliary verb to main verb) and require that the expectation of these be close to some value. The main difference between this work and theirs is the source of the information (a linguistic informant vs. cross-lingual projection). Also, we define our regularization with respect to inequality constraints (the model is not penalized for exceeding the required model expectations), while they require moments to be close to an estimated value. We suspect that the two learning methods could perform comparably when they exploit similar information.",
"cite_spans": [
{
"start": 59,
"end": 84,
"text": "(Mann and McCallum, 2007;",
"ref_id": "BIBREF17"
},
{
"start": 85,
"end": 109,
"text": "Mann and McCallum, 2008)",
"ref_id": "BIBREF18"
},
{
"start": 385,
"end": 405,
"text": "(Druck et al., 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
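The inequality-versus-equality distinction drawn above can be made concrete with toy penalty functions. This is purely our own illustration; the bound b and target v are placeholders, not values from either paper.

```python
# Toy illustration: posterior regularization with an inequality constraint
# E[f] >= b penalizes only the shortfall below the bound, while a generalized-
# expectation-style penalty punishes deviation from a target in both directions.

def pr_penalty(expectation, b):
    # zero penalty once the expectation meets or exceeds the bound
    return max(0.0, b - expectation)

def ge_penalty(expectation, v):
    # squared distance penalizes over- and under-shooting alike
    return (expectation - v) ** 2

print(pr_penalty(0.95, 0.9))  # 0.0: exceeding the bound costs nothing
print(ge_penalty(0.95, 0.9))  # positive: any deviation from the target is penalized
```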
{
"text": "In this paper, we proposed a novel and effective learning scheme for transferring dependency parses across bitext. By enforcing projected dependency constraints approximately and in expectation, our framework allows robust learning from noisy, partially supervised target sentences, instead of committing to entire parses. We show that discriminative training generally outperforms generative approaches even in this very weakly supervised setting. By adding easily specified language-specific constraints, our models begin to rival strong supervised baselines for small amounts of data. Our framework can handle a wide range of constraints and we are currently exploring richer syntactic constraints that involve conservation of multiple edge constructions as well as constraints on conservation of surface length of dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We chose \u03b7 in the following way: we split the unlabeled parallel text into two portions. We trained models with different \u03b7 on one portion and ran them on the other portion. We chose the model with the highest fraction of conserved constraints on the second portion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
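The η selection recipe in this footnote is plain held-out model selection. A sketch under assumptions: `train_model` and `conserved_fraction` are hypothetical stand-ins for the paper's training and constraint-checking routines, which are not specified at this level of detail.

```python
# Sketch of the footnote's recipe for choosing eta: split the unlabeled bitext
# in half, train one model per candidate eta on the first half, and keep the
# eta whose model conserves the most constraints on the second half.
# train_model and conserved_fraction are hypothetical stand-ins.

def select_eta(bitext, candidate_etas, train_model, conserved_fraction):
    half = len(bitext) // 2
    train_part, dev_part = bitext[:half], bitext[half:]
    return max(
        candidate_etas,
        key=lambda eta: conserved_fraction(train_model(train_part, eta), dev_part),
    )

# Dummy example: the "model" is just eta itself, and conservation peaks at 0.9
best = select_eta(list(range(100)), [0.5, 0.9, 1.0],
                  train_model=lambda data, eta: eta,
                  conserved_fraction=lambda model, data: -abs(model - 0.9))
print(best)  # 0.9
```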
],
"back_matter": [
{
"text": "This work was partially supported by an Integrative Graduate Education and Research Traineeship grant from the National Science Foundation (NSF IGERT 0504487), by ARO MURI SUBTLE W911NF-07-1-0216 and by the European Projects AsIsKnown (FP6-028044) and LTfLL (FP7-212578).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Treebanks: Building and Using Parsed Corpora",
"authors": [
{
"first": "A",
"middle": [],
"last": "Abeill\u00e9",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Abeill\u00e9. 2003. Treebanks: Building and Using Parsed Corpora. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning dependency translation models as collections of finite state head transducers",
"authors": [
{
"first": "H",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Douglas",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Alshawi, S. Bangalore, and S. Douglas. 2000. Learning dependency translation models as collections of finite state head transducers. Computational Linguistics, 26(1).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Freeling 1.3: Syntactic and semantic services in an open-source nlp library",
"authors": [
{
"first": "J",
"middle": [],
"last": "Atserias",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Casas",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Comelles",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gonz\u00e1lez",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Padr\u00f3",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Padr\u00f3",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Atserias, B. Casas, E. Comelles, M. Gonz\u00e1lez, L. Padr\u00f3, and M. Padr\u00f3. 2006. Freeling 1.3: Syntactic and semantic services in an open-source nlp library. In Proc. LREC, Genoa, Italy.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, S. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1994. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Structure and performance of a dependency language model",
"authors": [
{
"first": "C",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Engle",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Jimenez",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Mangu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Printz",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Ristad",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. Eurospeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Chelba, D. Engle, F. Jelinek, V. Jimenez, S. Khudanpur, L. Mangu, H. Printz, E. Ristad, R. Rosenfeld, A. Stolcke, and D. Wu. 1997. Structure and performance of a dependency language model. In Proc. Eurospeech.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Head-Driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semisupervised learning of dependency parsers using generalized expectation criteria",
"authors": [
{
"first": "G",
"middle": [],
"last": "Druck",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Druck, G. Mann, and A. McCallum. 2009. Semi-supervised learning of dependency parsers using generalized expectation criteria. In Proc. ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Three new probabilistic models for dependency parsing: an exploration",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. CoLing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisner. 1996. Three new probabilistic models for dependency parsing: an exploration. In Proc. CoLing.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Phrasal cohesion and statistical machine translation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Fox",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "304--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Fox. 2002. Phrasal cohesion and statistical machine translation. In Proc. EMNLP, pages 304-311.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Multi-view learning over structured and nonidentical outputs",
"authors": [
{
"first": "K",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Graca",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. UAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Ganchev, J. Graca, J. Blitzer, and B. Taskar. 2008. Multi-view learning over structured and non-identical outputs. In Proc. UAI.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Expectation maximization and posterior constraints",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gra\u00e7a",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Gra\u00e7a, K. Ganchev, and B. Taskar. 2008. Expectation maximization and posterior constraints. In Proc. NIPS.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Postcat -posterior constrained alignment toolkit",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gra\u00e7a",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2009,
"venue": "The Third Machine Translation Marathon",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Gra\u00e7a, K. Ganchev, and B. Taskar. 2009. Postcat - posterior constrained alignment toolkit. In The Third Machine Translation Marathon.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Robust textual inference via graph matching",
"authors": [
{
"first": "A",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Haghighi, A. Ng, and C. Manning. 2005. Robust textual inference via graph matching. In Proc. EMNLP.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bootstrapping parsers via syntactic projection across parallel texts",
"authors": [
{
"first": "R",
"middle": [],
"last": "Hwa",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Weinberg",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cabezas",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Kolak",
"suffix": ""
}
],
"year": 2005,
"venue": "Natural Language Engineering",
"volume": "11",
"issue": "",
"pages": "11--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Hwa, P. Resnik, A. Weinberg, C. Cabezas, and O. Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural Language Engineering, 11:11-311.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Corpus-based induction of syntactic structure: Models of dependency and constituency",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C. Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proc. of ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Europarl: A parallel corpus for statistical machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Reestimation and bestfirst parsing algorithm for probabilistic dependency grammar",
"authors": [
{
"first": "S",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 1997,
"venue": "WVLC-5",
"volume": "",
"issue": "",
"pages": "41--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Lee and K. Choi. 1997. Reestimation and best-first parsing algorithm for probabilistic dependency grammar. In WVLC-5, pages 41-55.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Simple, robust, scalable semi-supervised learning via expectation regularization",
"authors": [
{
"first": "G",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Mann and A. McCallum. 2007. Simple, robust, scalable semi-supervised learning via expectation regularization. In Proc. ICML.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Generalized expectation criteria for semi-supervised learning of conditional random fields",
"authors": [
{
"first": "G",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "870--878",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Mann and A. McCallum. 2008. Generalized expectation criteria for semi-supervised learning of conditional random fields. In Proc. ACL, pages 870-878.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "91--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald, K. Crammer, and F. Pereira. 2005. Online large-margin training of dependency parsers. In Proc. ACL, pages 91-98.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Dependency syntax: theory and practice",
"authors": [
{
"first": "I",
"middle": [],
"last": "Mel'\u010duk",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Mel'\u010duk. 1988. Dependency syntax: theory and practice. SUNY.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A multilingual paradigm for automatic verb classification",
"authors": [
{
"first": "P",
"middle": [],
"last": "Merlo",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Stevenson",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Tsang",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Allaria",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Merlo, S. Stevenson, V. Tsang, and G. Allaria. 2002. A multilingual paradigm for automatic verb classification. In Proc. ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A new view of the EM algorithm that justifies incremental, sparse and other variants",
"authors": [
{
"first": "R",
"middle": [
"M"
],
"last": "Neal",
"suffix": ""
},
{
"first": "G",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 1998,
"venue": "Learning in Graphical Models",
"volume": "",
"issue": "",
"pages": "355--368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. M. Neal and G. E. Hinton. 1998. A new view of the EM algorithm that justifies incremental, sparse and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355-368. Kluwer.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Pseudo-projective dependency parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nivre and J. Nilsson. 2005. Pseudo-projective dependency parsing. In Proc. ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The CoNLL 2007 shared task on dependency parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nivre, J. Hall, S. K\u00fcbler, R. McDonald, J. Nilsson, S. Riedel, and D. Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proc. EMNLP-CoNLL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. J. Och and H. Ney. 2000. Improved statistical alignment models. In Proc. ACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Dependency treelet translation: syntactically informed phrasal smt",
"authors": [
{
"first": "C",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Menezes",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Quirk, A. Menezes, and C. Cherry. 2005. Dependency treelet translation: syntactically informed phrasal smt. In Proc. ACL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A new string-to-dependency machine translation algorithm with a target dependency language model",
"authors": [
{
"first": "L",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Shen, J. Xu, and R. Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proc. of ACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Annealing structural bias in multilingual weighted grammar induction",
"authors": [
{
"first": "N",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Smith and J. Eisner. 2006. Annealing structural bias in multilingual weighted grammar induction. In Proc. ACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Building a multilingual parallel subtitle corpus",
"authors": [
{
"first": "J",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. CLIN",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Tiedemann. 2007. Building a multilingual parallel subtitle corpus. In Proc. CLIN.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Feature-rich part-of-speech tagging with a cyclic dependency network",
"authors": [
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Toutanova, D. Klein, C. Manning, and Y. Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proc. HLT-NAACL.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Bidirectional inference with the easiest-first strategy for tagging sequence data",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. HLT/EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Tsuruoka and J. Tsujii. 2005. Bidirectional inference with the easiest-first strategy for tagging sequence data. In Proc. HLT/EMNLP.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Statistical dependency analysis with support vector machines",
"authors": [
{
"first": "H",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. IWPT",
"volume": "",
"issue": "",
"pages": "195--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Yamada and Y. Matsumoto. 2003a. Statistical dependency analysis with support vector machines. In Proc. IWPT, pages 195-206.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Statistical dependency analysis with support vector machines",
"authors": [
{
"first": "H",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. IWPT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Yamada and Y. Matsumoto. 2003b. Statistical dependency analysis with support vector machines. In Proc. IWPT.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Inducing multilingual pos taggers and np bracketers via robust projection across aligned corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Ngai",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Yarowsky and G. Ngai. 2001. Inducing multilingual pos taggers and np bracketers via robust projection across aligned corpora. In Proc. NAACL.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Inducing multilingual text analysis tools via robust projection across aligned corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Ngai",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wicentowski",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Yarowsky, G. Ngai, and R. Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proc. HLT.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "(a) Overview of our grammar induction approach via bitext: the source (English) is parsed and word-aligned with target; after filtering, projected dependencies define constraints over target parse tree space, providing weak supervision for learning a target grammar. (b) An example word-aligned sentence pair with perfectly projected dependencies.",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Basic Uni-gram Features xi-word, xi-pos xi-word xi-pos xj-word, xj-pos xj-word xj-pos Basic Bi-gram Features xi-word, xi-pos, xj-word, xj-pos xi-pos, xj-word, xj-pos xi-word, xj-word, xj-pos xi-word, xi-pos, xj-pos xi-word, xi-pos, xj-word xi-word, xj-word xi-pos, xj-pos In Between POS Features xi-pos, b-pos, xj-pos Surrounding Word POS Features xi-pos, xi-pos+1, xj-pos-1, xj-pos xi-pos-1, xi-pos, xj-pos-1, xj-pos xi-pos, xi-pos+1, xj-pos, xj-pos+1 xi-pos-1, xi-pos, xj-pos, xj-pos+1",
"num": null
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"text": "Learning curve of the discriminative no-rules transfer model on Bulgarian bitext, testing on CoNLL train sentences of up to 10 words.",
"num": null
},
"FIGREF4": {
"uris": null,
"type_str": "figure",
"text": "A Spanish example where an auxiliary verb dominates the main verb.",
"num": null
},
"FIGREF5": {
"uris": null,
"type_str": "figure",
"text": "Comparison to parsers with supervised estimation and transfer. Top: Generative. Bottom: Discriminative. Left: Bulgarian. Right: Spanish. The transfer models were trained on 10k sentences all of length at most 20, all models tested on CoNLL train sentences of up to 10 words. The x-axis shows the number of examples used to train the supervised model. Boxes show first and third quartile, whiskers extend to max and min, with the line passing through the median. Supervised experiments used 30 random samples from CoNLL train.",
"num": null
},
"TABREF0": {
"html": null,
"text": "Features used by the MSTParser. For each edge (i, j), xi-word is the parent word and xj-word is the child word, analogously for POS tags. The +1 and -1 denote preceding and following tokens in the sentence, while b denotes tokens between xi and xj.",
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF1": {
"html": null,
"text": "rules 7 rules no rules 3 rules no rules 2 rules 7 rules no rules 3 rulesTable 2: Comparison between transferring a single tree of edges and transferring all possible projected edges. The transfer models were trained on 10k sentences of length up to 20, all models tested on CoNLL train sentences of up to 10 words. Punctuation was stripped at train time.",
"content": "<table><tr><td/><td/><td colspan=\"3\">Discriminative model</td><td/><td/><td colspan=\"3\">Generative model</td><td/></tr><tr><td/><td/><td>Bulgarian</td><td/><td colspan=\"2\">Spanish</td><td/><td>Bulgarian</td><td/><td colspan=\"2\">Spanish</td></tr><tr><td colspan=\"3\">no rules 2 Baseline 63.8 72.1</td><td>72.6</td><td>67.6</td><td>69.0</td><td>66.5</td><td>69.1</td><td>71.0</td><td>68.2</td><td>71.3</td></tr><tr><td>Post.Reg.</td><td>66.9</td><td>77.5</td><td>78.3</td><td>70.6</td><td>72.3</td><td>67.8</td><td>70.7</td><td>70.8</td><td>69.5</td><td>72.8</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">on the Bul-</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF3": {
"html": null,
"text": "Top 4 discriminative parser errors by child POS tag and true/guess parent POS tag in the Bulgarian CoNLL train data of length up to 10. Training with no language-specific rules (left); two rules (center); and seven rules (right). POS meanings: V verb, N noun, P pronoun, R preposition, T particle. Accuracies are by child or parent truth/guess POS tag.",
"content": "<table><tr><td/><td>0.75</td><td/><td/><td/><td/><td>0.8</td><td/></tr><tr><td/><td/><td/><td/><td/><td/><td>0.75</td><td/></tr><tr><td>accuracy (%)</td><td>0.65 0.7</td><td/><td/><td/><td>accuracy (%)</td><td>0.7</td><td/></tr><tr><td/><td/><td/><td/><td>supervised</td><td/><td/><td/></tr><tr><td/><td>0.6</td><td/><td/><td>no rules two rules seven rules</td><td/><td>0.65</td><td/><td>supervised no rules three rules</td></tr><tr><td/><td>20</td><td>40</td><td>60</td><td>80 100 120 140</td><td/><td>20</td><td>40</td><td>60</td><td>80 100 120 140</td></tr><tr><td/><td/><td colspan=\"3\">supervised training data size</td><td/><td/><td colspan=\"2\">supervised training data size</td></tr><tr><td/><td/><td/><td/><td/><td/><td>0.8</td><td/></tr><tr><td/><td>0.8</td><td/><td/><td/><td/><td/><td/></tr><tr><td>accuracy (%)</td><td>0.7 0.75</td><td/><td/><td>supervised no rules two rules</td><td>accuracy (%)</td><td>0.7 0.75</td><td/></tr><tr><td/><td/><td/><td/><td>seven rules</td><td/><td/><td/></tr><tr><td/><td>0.65</td><td/><td/><td/><td/><td>0.65</td><td/><td>supervised no rules</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td>three rules</td></tr><tr><td/><td>20</td><td>40</td><td>60</td><td>80 100 120 140</td><td/><td>20</td><td>40</td><td>60</td><td>80 100 120 140</td></tr><tr><td/><td/><td colspan=\"3\">supervised training data size</td><td/><td/><td colspan=\"2\">supervised training data size</td></tr></table>",
"num": null,
"type_str": "table"
}
}
}
}