{
"paper_id": "E12-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:36:34.348175Z"
},
"title": "Cross-Framework Evaluation for Statistical Parsing",
"authors": [
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Uppsala University",
"location": {
"postBox": "Box 635",
"postCode": "75126",
"settlement": "Uppsala",
"country": "Sweden"
}
},
"email": "tsarfaty@stp.lingfil.uu.se"
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Uppsala University",
"location": {
"postBox": "Box 635",
"postCode": "75126",
"settlement": "Uppsala",
"country": "Sweden"
}
},
"email": ""
},
{
"first": "Evelina",
"middle": [],
"last": "Andersson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Uppsala University",
"location": {
"postBox": "Box 635",
"postCode": "75126",
"settlement": "Uppsala",
"country": "Sweden"
}
},
"email": "evelina.andersson@lingfil.uu.se"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A serious bottleneck of comparative parser evaluation is the fact that different parsers subscribe to different formal frameworks and theoretical assumptions. Converting outputs from one framework to another is less than optimal as it easily introduces noise into the process. Here we present a principled protocol for evaluating parsing results across frameworks based on function trees, tree generalization and edit distance metrics. This extends a previously proposed framework for cross-theory evaluation and allows us to compare a wider class of parsers. We demonstrate the usefulness and language independence of our procedure by evaluating constituency and dependency parsers on English and Swedish.",
"pdf_parse": {
"paper_id": "E12-1006",
"_pdf_hash": "",
"abstract": [
{
"text": "A serious bottleneck of comparative parser evaluation is the fact that different parsers subscribe to different formal frameworks and theoretical assumptions. Converting outputs from one framework to another is less than optimal as it easily introduces noise into the process. Here we present a principled protocol for evaluating parsing results across frameworks based on function trees, tree generalization and edit distance metrics. This extends a previously proposed framework for cross-theory evaluation and allows us to compare a wider class of parsers. We demonstrate the usefulness and language independence of our procedure by evaluating constituency and dependency parsers on English and Swedish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The goal of statistical parsers is to recover a formal representation of the grammatical relations that constitute the argument structure of natural language sentences. The argument structure encompasses grammatical relationships between elements such as subject, predicate, object, etc., which are useful for further (e.g., semantic) processing. The parses yielded by different parsing frameworks typically obey different formal and theoretical assumptions concerning how to represent the grammatical relationships in the data (Rambow, 2010) . For example, grammatical relations may be encoded on top of dependency arcs in a dependency tree (Mel'\u010duk, 1988) , they may decorate nodes in a phrase-structure tree (Marcus et al., 1993; Maamouri et al., 2004; Sima'an et al., 2001) , or they may be read off of positions in a phrase-structure tree using hard-coded conversion procedures (de Marneffe et al., 2006) . This diversity poses a challenge to cross-experimental parser evaluation, namely: How can we evaluate the performance of these different parsers relative to one another?",
"cite_spans": [
{
"start": 528,
"end": 542,
"text": "(Rambow, 2010)",
"ref_id": "BIBREF26"
},
{
"start": 642,
"end": 657,
"text": "(Mel'\u010duk, 1988)",
"ref_id": "BIBREF19"
},
{
"start": 711,
"end": 732,
"text": "(Marcus et al., 1993;",
"ref_id": "BIBREF16"
},
{
"start": 733,
"end": 755,
"text": "Maamouri et al., 2004;",
"ref_id": "BIBREF15"
},
{
"start": 756,
"end": 777,
"text": "Sima'an et al., 2001)",
"ref_id": "BIBREF28"
},
{
"start": 883,
"end": 909,
"text": "(de Marneffe et al., 2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Current evaluation practices assume a set of correctly annotated test data (or gold standard) for evaluation. Typically, every parser is evaluated with respect to its own formal representation type and the underlying theory which it was trained to recover. Therefore, numerical scores of parses across experiments are incomparable. When comparing parses that belong to different formal frameworks, the notion of a single gold standard becomes problematic, and there are two different questions we have to answer. First, what is an appropriate gold standard for cross-parser evaluation? And secondly, how can we alleviate the differences between formal representation types and theoretical assumptions in order to make our comparison sound -that is, to make sure that we are not comparing apples and oranges?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A popular way to address this has been to pick one of the frameworks and convert all parser outputs to its formal type. When comparing constituency-based and dependency-based parsers, for instance, the output of constituency parsers has often been converted to dependency structures prior to evaluation (Cer et al., 2010; Nivre et al., 2010) . This solution has various drawbacks. First, it demands a conversion script that maps one representation type to another when some theoretical assumptions in one framework may be incompatible with the other one. In the constituency-to-dependency case, some constituency-based structures (e.g., coordination and ellipsis) do not comply with the single head assumption of dependency treebanks. Secondly, these scripts may be labor intensive to create, and are available mostly for English. So the evaluation protocol becomes language-dependent.",
"cite_spans": [
{
"start": 303,
"end": 321,
"text": "(Cer et al., 2010;",
"ref_id": "BIBREF7"
},
{
"start": 322,
"end": 341,
"text": "Nivre et al., 2010)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Tsarfaty et al. (2011) we proposed a general protocol for handling annotation discrepancies when comparing parses across different dependency theories. The protocol consists of three phases: converting all structures into function trees, for each sentence, generalizing the different gold standard function trees to get their common denominator, and employing an evaluation measure based on tree edit distance (TED) which discards edit operations that recover theory-specific structures. Although the protocol is potentially applicable to a wide class of syntactic representation types, formal restrictions in the procedures effectively limit its applicability only to representations that are isomorphic to dependency trees.",
"cite_spans": [
{
"start": 3,
"end": 25,
"text": "Tsarfaty et al. (2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The present paper breaks new ground in the ability to soundly compare the accuracy of different parsers relative to one another given that they employ different formal representation types and obey different theoretical assumptions. Our solution generally confines with the protocol proposed in Tsarfaty et al. (2011) but is re-formalized to allow for arbitrary linearly ordered labeled trees, thus encompassing constituency-based as well as dependency-based representations. The framework in Tsarfaty et al. (2011) assumes structures that are isomorphic to dependency trees, bypassing the problem of arbitrary branching. Here we lift this restriction, and define a protocol which is based on generalization and TED measures to soundly compare the output of different parsers.",
"cite_spans": [
{
"start": 295,
"end": 317,
"text": "Tsarfaty et al. (2011)",
"ref_id": "BIBREF30"
},
{
"start": 493,
"end": 515,
"text": "Tsarfaty et al. (2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We demonstrate the utility of this protocol by comparing the performance of different parsers for English and Swedish. For English, our parser evaluation across representation types allows us to analyze and precisely quantify previously encountered performance tendencies. For Swedish we show the first ever evaluation between dependency-based and constituency-based parsing models, all trained on the Swedish treebank data. All in all we show that our extended protocol, which can handle linearlyordered labeled trees with arbitrary branching, can soundly compare parsing results across frameworks in a representation-independent and language-independent fashion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Traditionally, different statistical parsers have been evaluated using specially designated evaluation measures that are designed to fit their representation types. Dependency trees are evaluated using attachment scores (Buchholz and Marsi, 2006) , phrase-structure trees are evaluated using ParsEval (Black et al., 1991) , LFG-based parsers postulate an evaluation procedure based on fstructures (Cahill et al., 2008) , and so on. From a downstream application point of view, there is no significance as to which formalism was used for generating the representation and which learning methods have been utilized. The bottom line is simply which parsing framework most accurately recovers a useful representation that helps to unravel the human-perceived interpretation.",
"cite_spans": [
{
"start": 220,
"end": 246,
"text": "(Buchholz and Marsi, 2006)",
"ref_id": "BIBREF4"
},
{
"start": 301,
"end": 321,
"text": "(Black et al., 1991)",
"ref_id": "BIBREF1"
},
{
"start": 397,
"end": 418,
"text": "(Cahill et al., 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: Relational Schemes for Cross-Framework Parse Evaluation",
"sec_num": "2"
},
{
"text": "Relational schemes, that is, schemes that encode the set of grammatical relations that constitute the predicate-argument structures of sentences, provide an interface to semantic interpretation. They are more intuitively understood than, say, phrase-structure trees, and thus they are also more useful for practical applications. For these reasons, relational schemes have been repeatedly singled out as an appropriate level of representation for the evaluation of statistical parsers (Lin, 1995; Carroll et al., 1998; Cer et al., 2010) .",
"cite_spans": [
{
"start": 485,
"end": 496,
"text": "(Lin, 1995;",
"ref_id": "BIBREF14"
},
{
"start": 497,
"end": 518,
"text": "Carroll et al., 1998;",
"ref_id": "BIBREF6"
},
{
"start": 519,
"end": 536,
"text": "Cer et al., 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: Relational Schemes for Cross-Framework Parse Evaluation",
"sec_num": "2"
},
{
"text": "The annotated data which statistical parsers are trained on encode these grammatical relationships in different ways. Dependency treebanks provide a ready-made representation of grammatical relations on top of arcs connecting the words in the sentence (K\u00fcbler et al., 2009) . The Penn Treebank and phrase-structure annotated resources encode partial information about grammatical relations as dash-features decorating phrase structure nodes (Marcus et al., 1993) . Treebanks like Tiger for German (Brants et al., 2002) and Talbanken for Swedish (Nivre and Megyesi, 2007) explicitly map phrase structures onto grammatical relations using dedicated edge labels. The Relational-Realizational structures of Tsarfaty and Sima'an (2008) encode relational networks (sets of relations) projected and realized by syntactic categories on top of ordinary phrase-structure nodes.",
"cite_spans": [
{
"start": 252,
"end": 273,
"text": "(K\u00fcbler et al., 2009)",
"ref_id": "BIBREF13"
},
{
"start": 441,
"end": 462,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF16"
},
{
"start": 497,
"end": 518,
"text": "(Brants et al., 2002)",
"ref_id": "BIBREF2"
},
{
"start": 545,
"end": 570,
"text": "(Nivre and Megyesi, 2007)",
"ref_id": "BIBREF21"
},
{
"start": 703,
"end": 730,
"text": "Tsarfaty and Sima'an (2008)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: Relational Schemes for Cross-Framework Parse Evaluation",
"sec_num": "2"
},
{
"text": "Function trees, as defined in Tsarfaty et al. (2011) , are linearly ordered labeled trees in which every node is labeled with the grammatical func- The algorithm for extracting a function tree from a dependency tree as in (a) is provided in Tsarfaty et al. (2011) . For a phrase-structure tree as in (b) we can replace each node label with its function (dash-feature).",
"cite_spans": [
{
"start": 30,
"end": 52,
"text": "Tsarfaty et al. (2011)",
"ref_id": "BIBREF30"
},
{
"start": 241,
"end": 263,
"text": "Tsarfaty et al. (2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: Relational Schemes for Cross-Framework Parse Evaluation",
"sec_num": "2"
},
{
"text": "In a relational-realizational structure like (c) we can remove the projection nodes (sets) and realization nodes (phrase labels), which leaves the function nodes intact.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: Relational Schemes for Cross-Framework Parse Evaluation",
"sec_num": "2"
},
{
"text": "tion of the dominated span. Function trees benefit from the same advantages as other relational schemes, namely that they are intuitive to understand, they provide the interface for semantic interpretation, and thus may be useful for downstream applications. Yet they do not suffer from formal restrictions inherent in dependency structures, for instance, the single head assumption. For many formal representation types there exists a fully deterministic, heuristics-free, procedure mapping them to function trees. In Figure 1 we illustrate some such procedures for a simple transitive sentence. Now, while all the structures at the right hand side of Figure 1 are of the same formal type (function trees), they have different tree structures due to different theoretical assumptions underlying the original formal frameworks. Once we have converted framework-specific representations into function trees, the problem of cross-framework evaluation can potentially be reduced to a cross-theory evaluation following Tsarfaty et al. 2011. The main idea is that once all structures have been converted into function trees, one can perform a formal operation called generalization in order to harmonize the differences between theories, and measure accurately the distance of parse hypotheses from the generalized gold. The generalization operation defined in Tsarfaty et al. (2011) , however, cannot handle trees that may contain unary chains, and therefore cannot be used for arbitrary function trees.",
"cite_spans": [
{
"start": 1356,
"end": 1378,
"text": "Tsarfaty et al. (2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 519,
"end": 527,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 653,
"end": 661,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Preliminaries: Relational Schemes for Cross-Framework Parse Evaluation",
"sec_num": "2"
},
{
"text": "Consider for instance (t1) and (t2) in Figure 2 . According to the definition of subsumption in Tsarfaty et al. 2011, (t1) is subsumed by (t2) and vice versa, so the two trees should be identical -but they are not. The interpretation we wish to give to a function tree such as (t1) is that the word w has both the grammatical function f1 and the grammatical function f2. This can be graphically represented as a set of labels dominating w, as in (t3). We call structures such as (t3) multifunction trees. In the next section we formally define multi-function trees, and then use them to develop our protocol for cross-framework and crosstheory evaluation. ",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 47,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Preliminaries: Relational Schemes for Cross-Framework Parse Evaluation",
"sec_num": "2"
},
{
"text": "An ordinary function tree is a linearly ordered tree T = (V, A) with yield w 1 , ..., w n , where internal nodes are labeled with grammatical function labels drawn from some set L. We use span(v) and label(v) to denote the yield and label, respectively, of an internal node v. A multi-function tree is a linearly ordered tree T = (V, A) with yield w 1 , ..., w n , where internal nodes are labeled with sets of grammatical function labels drawn from L and where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining Multi-Function Trees",
"sec_num": "3.1"
},
{
"text": "v = v implies span(v) = span(v ) for all internal nodes v, v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining Multi-Function Trees",
"sec_num": "3.1"
},
{
"text": ". We use labels(v) to denote the label set of an internal node v.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining Multi-Function Trees",
"sec_num": "3.1"
},
{
"text": "We interpret multi-function trees as encoding sets of functional constraints over spans in function trees. Each node v in a multi-function tree represents a constraint of the form: for each l \u2208 labels(v), there should be a node v in the function tree such that span(v) = span(v ) and label(v ) = l. Whenever we have a conversion for function trees, we can efficiently collapse them into multi-function trees with no unary productions, and with label sets labeling their nodes. Thus, trees (t1) and (t2) in Figure 2 would both be mapped to tree (t3), which encodes the functional constraints encoded in either of them.",
"cite_spans": [],
"ref_spans": [
{
"start": 506,
"end": 514,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Defining Multi-Function Trees",
"sec_num": "3.1"
},
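The collapsing step described above is easy to make concrete. Below is a minimal sketch, not the authors' implementation: it represents a function tree as nested (label, children) tuples with word strings as terminals (an assumed encoding), collects the (span, label) pairs of all internal nodes, and collapses unary chains into a mapping from spans to label sets, so that trees like (t1) and (t2) yield the same multi-function tree (t3).

```python
def collect(tree, start=0):
    """Collect (span, label) pairs for every internal node of a function
    tree given as (label, children); terminals are plain word strings.
    Returns (end_position, pairs)."""
    label, children = tree
    pos, pairs = start, []
    for child in children:
        if isinstance(child, str):      # terminal word: advance one position
            pos += 1
        else:                           # internal node: recurse
            pos, sub = collect(child, pos)
            pairs.extend(sub)
    pairs.append(((start, pos), label))
    return pos, pairs

def to_multifunction(tree):
    """Collapse unary chains: map each span to the set of all labels
    dominating exactly that span (a multi-function tree)."""
    multi = {}
    for span, label in collect(tree)[1]:
        multi.setdefault(span, set()).add(label)
    return multi
```

For example, ("f1", [("f2", ["w"])]) and ("f2", [("f1", ["w"])]) both collapse to {(0, 1): {"f1", "f2"}}, mirroring how (t1) and (t2) map to (t3).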
{
"text": "For dependency trees, we assume the conversion to function trees defined in Tsarfaty et al. (2011) , where head daughters always get the label 'hd'. For PTB style phrase-structure trees, we replace the phrase-structure labels with functional dash-features. In relational-realization structures we remove projection and realization nodes. Deterministic conversions exist also for Tiger style treebanks and frameworks such as LFG, but we do not discuss them here. 1",
"cite_spans": [
{
"start": 76,
"end": 98,
"text": "Tsarfaty et al. (2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Defining Multi-Function Trees",
"sec_num": "3.1"
},
{
"text": "1 All the conversions we use are deterministic and are defined in graph-theoretic and language-independent terms. We make them available at http://stp.lingfil.uu. se/\u02dctsarfaty/unipar/index.html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining Multi-Function Trees",
"sec_num": "3.1"
},
{
"text": "Once we obtain multi-function trees for all the different gold standard representations in the system, we feed them to a generalization operation as shown in Figure 3 . The goal of this operation is to provide a consensus gold standard that captures the linguistic structure that the different gold theories agree on. The generalization structures are later used as the basis for the TED-based evaluation. Generalization is defined by means of subsumption. A multi-function tree subsumes another one if and only if all the constraints defined by the first tree are also defined by the second tree. So, instead of demanding equality of labels as in Tsarfaty et al. 2011, we demand set inclusion:",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 166,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Generalizing Multi-Function Trees",
"sec_num": "3.2"
},
{
"text": "T-Subsumption, denoted t , is a relation between multi-function trees that indicates that a tree \u03c0 1 is consistent with and more general than tree \u03c0 2 . Formally: \u03c0 1 t \u03c0 2 iff for every node n \u2208 \u03c0 1 there exists a node m \u2208 \u03c0 2 such that span(n) = span(m) and labels(n) \u2286 labels(m).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizing Multi-Function Trees",
"sec_num": "3.2"
},
{
"text": "T-Unification, denoted t , is an operation that returns the most general tree structure that contains the information from both input trees, and fails if such a tree does not exist. Formally: \u03c0 1 t \u03c0 2 = \u03c0 3 iff \u03c0 1 t \u03c0 3 and \u03c0 2 t \u03c0 3 , and for all \u03c0 4 such that \u03c0 1 t \u03c0 4 and \u03c0 2 t \u03c0 4 it holds that \u03c0 3 t \u03c0 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizing Multi-Function Trees",
"sec_num": "3.2"
},
{
"text": "T-Generalization, denoted t , is an operation that returns the most specific tree that is more general than both trees. Formally, \u03c0 1 t \u03c0 2 = \u03c0 3 iff \u03c0 3 t \u03c0 1 and \u03c0 3 t \u03c0 2 , and for every \u03c0 4 such that \u03c0 4 t \u03c0 1 and \u03c0 4 t \u03c0 2 it holds that \u03c0 4 t \u03c0 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizing Multi-Function Trees",
"sec_num": "3.2"
},
{
"text": "The generalization tree contains all nodes that exist in both trees, and for each node it is labeled by the intersection of the label sets dominating the same span in both trees. The unification tree contains nodes that exist in one tree or another, and for each span it is labeled by the union of all label sets for this span in either tree. If we generalize two trees and one tree has no specification for labels over a span, it does not share anything with the label set dominating the same span in the other tree, and the label set dominating this span in the generalized tree is empty. If the trees do not agree on any label for a particular span, the respective node is similarly labeled with an empty set. When we wish to unify theories, then an empty set over a span is unified with any other set dominating the same span in the other tree, without altering it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizing Multi-Function Trees",
"sec_num": "3.2"
},
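Under the span-to-label-set view of multi-function trees, the generalization and unification described above reduce to set operations per span. A minimal sketch under that assumed representation, not the paper's code; note that full unification can also fail when the two trees have crossing spans, a case this sketch omits.

```python
def generalize(t1, t2):
    """T-Generalization sketch: keep only spans present in both trees;
    label each with the intersection of its label sets (possibly empty)."""
    return {s: t1[s] & t2[s] for s in t1.keys() & t2.keys()}

def unify(t1, t2):
    """T-Unification sketch: keep spans present in either tree; label each
    with the union of its label sets (an empty set unifies transparently
    with any other set). Caveat: real unification fails when the combined
    spans cannot form a tree (crossing brackets), which is not checked here."""
    return {s: t1.get(s, set()) | t2.get(s, set())
            for s in t1.keys() | t2.keys()}
```

For instance, generalizing a tree that labels span (0, 2) with {"sbj"} against one labeling it {"sbj", "hd"} keeps {"sbj"}, while a span on which the trees share no label gets the empty set, exactly as described above.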
{
"text": "Digression: Using Unification to Merge Information From Different Treebanks In Tsarfaty et al. 2011, only the generalization operation was used, providing the common denominator of all the gold structures and serving as a common ground for evaluation. The unification operation is useful for other NLP tasks, for instance, combining information from two different annotation schemes or enriching one annotation scheme with information from a different one. In particular, we can take advantage of the new framework to enrich the node structure reflected in one theory with grammatical functions reflected in an annotation scheme that follows a different theory. To do so, we define the Tree-Labeling-Unification operation on multi-function trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizing Multi-Function Trees",
"sec_num": "3.2"
},
{
"text": "TL-Unification, denoted tl , is an operation that returns a tree that retains the structure of the first tree and adds labels that exist over its spans in the second tree. Formally: \u03c0 1 tl \u03c0 2 = \u03c0 3 iff for every node n \u2208 \u03c0 1 there exists a node m \u2208 \u03c0 3 such that span(m) = span(n) and labels(m) = labels(n) \u222a labels(\u03c0 2 , span(n)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizing Multi-Function Trees",
"sec_num": "3.2"
},
{
"text": "Where labels(\u03c0 2 , span(n)) is the set of labels of the node with yield span(n) in \u03c0 2 if such a node exists and \u2205 otherwise. We further discuss the TL-Unification and its use for data preparation in \u00a74.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizing Multi-Function Trees",
"sec_num": "3.2"
},
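A sketch of TL-Unification under the same assumed span-to-label-set encoding: the output keeps exactly the spans of the first tree and augments each with whatever labels the second tree carries over that span.

```python
def tl_unify(t1, t2):
    """TL-Unification sketch: retain t1's structure (its spans only) and
    add the labels t2 has over those spans; spans unique to t2 are
    dropped, since only t1's nodes are required to survive."""
    return {span: labels | t2.get(span, set()) for span, labels in t1.items()}
```

This is the operation that lets one annotation scheme's node structure be enriched with grammatical functions from another scheme, as in the data preparation discussed in Section 4.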
{
"text": "The result of the generalization operation provides us with multi-function trees for each of the sentences in the test set representing sets of constraints on which the different gold theories agree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "We would now like to use distance-based metrics in order to measure the gap between the gold and predicted theories. The idea behind distancebased evaluation in Tsarfaty et al. (2011) is that recording the edit operations between the native gold and the generalized gold allows one to discard their cost when computing the cost of a parse hypothesis turned into the generalized gold. This makes sure that different parsers do not get penalized, or favored, due to annotation specific decisions that are not shared by other frameworks.",
"cite_spans": [
{
"start": 161,
"end": 183,
"text": "Tsarfaty et al. (2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "The problem is now that TED is undefined with respect to multi-function trees because it cannot handle complex labels. To overcome this, we convert multi-function trees into sorted function trees, which are simply function trees in which any label set is represented as a unary chain of single-labeled nodes, and the nodes are sorted according to the canonical order of their labels. 2 In case of an empty set, a 0-length chain is created, that is, no node is created over this span. Sorted function trees prevent reordering nodes in a chain in one tree to fit the order in another tree, since it would violate the idea that the set of constraints over a span in a multi-function tree is unordered.",
"cite_spans": [
{
"start": 384,
"end": 385,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "The edit operations we assume are addnode(l, i, j) and delete-node(l, i, j) where l \u2208 L is a grammatical function label and i < j define the span of a node in the tree. Insertion into a unary chain must confine with the canonical order of the labels. Every operation is assigned a cost. An edit script is a sequence of edit operations that turns a function tree \u03c0 1 into \u03c0 2 , that is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "ES(\u03c0 1 , \u03c0 2 ) = e 1 , . . . , e k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "Since all operations are anchored in spans, the sequence can be determined to have a unique order of traversing the tree (say, DFS). Different edit scripts then only differ in their set of operations on spans. The edit distance problem is finding the minimal cost script, that is, one needs to solve:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "ES*(\u03c0 1 , \u03c0 2 ) = argmin_{ES(\u03c0 1 , \u03c0 2 )} \u2211_{e \u2208 ES(\u03c0 1 , \u03c0 2 )} cost(e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "In the current setting, when using only add and delete operations on spans, there is only one edit script that corresponds to the minimal edit cost. So, finding the minimal edit script entails finding a single set of operations turning \u03c0 1 into \u03c0 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "We can now define \u03b4 for the ith framework, as the error of parse i relative to its native gold standard gold i and to the generalized gold gen. This is the edit cost minus the cost of the script turning parse i into gen intersected with the script turning gold i into gen. The underlying intuition is that if an operation that was used to turn parse i into gen is used to discard theory-specific information from gold i , its cost should not be counted as error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "\u03b4(parse i , gold i , gen) = cost(ES * (parse i , gen)) \u2212cost(ES * (parse i , gen) \u2229 ES * (gold i , gen))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "In order to turn distance measures into parsescores we now normalize the error relative to the size of the trees and subtract it from a unity. So the Sentence Score for parsing with framework i is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "score(parse i , gold i , gen) = 1 \u2212 \u03b4(parse i , gold i ,gen) |parse i | + |gen|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
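Because the only edit operations are adds and deletes anchored in spans, the minimal edit script between two multi-function trees is simply the symmetric difference of their (span, label) constraint sets, and the sentence score follows directly. The sketch below illustrates the delta and score formulas above under assumed unit costs and a constraint-count notion of tree size; it is not the TEDEVAL implementation.

```python
def constraints(tree):
    """Flatten a span-to-label-set tree into (span, label) constraints."""
    return {(span, label) for span, labels in tree.items() for label in labels}

def edit_script(src, tgt):
    """Minimal script turning src into tgt with unit-cost add/delete over
    spans: add constraints missing from src, delete superfluous ones."""
    s, t = constraints(src), constraints(tgt)
    return {("add",) + c for c in t - s} | {("delete",) + c for c in s - t}

def tree_size(tree):
    """Assumed size measure: number of labeled constraints in the tree."""
    return sum(len(labels) for labels in tree.values())

def score(parse, gold, gen):
    """Sentence score: edits shared with the gold-to-generalized script are
    discarded, since they remove theory-specific structure, not errors."""
    delta = len(edit_script(parse, gen) - edit_script(gold, gen))
    return 1 - delta / (tree_size(parse) + tree_size(gen))
```

A parse identical to the generalized gold scores 1.0, and deleting a label that the native gold also had to delete costs nothing, matching the intuition behind the delta definition.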
{
"text": "Finally, Test-Set Average is defined by macroavaraging over all sentences in the test-set:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "1 \u2212 ( \u2211_{j=1}^{|testset|} \u03b4(parse ij , gold ij , gen j ) ) / ( \u2211_{j=1}^{|testset|} (|parse ij | + |gen j |) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "This last formula represents the TEDEVAL metric that we use in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "A Note on System Complexity Conversion of a dependency or a constituency tree into a function tree is linear in the size of the tree. Our implementation of the generalization and unification operation is an exact, greedy, chart-based algorithm that runs in polynomial time (O(n 2 ) in n the number of terminals). The TED software that we utilize builds on the TED efficient algorithm of Zhang and Shasha (1989) which runs in",
"cite_spans": [
{
"start": 387,
"end": 410,
"text": "Zhang and Shasha (1989)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "O(|T_1||T_2| min(d_1, n_1) min(d_2, n_2)) time, where d_i is the tree degree (depth) and n_i is the number of terminals in the respective tree (Bille, 2005).",
"cite_spans": [
{
"start": 143,
"end": 156,
"text": "(Bille, 2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TED Measures for Multi-Function Trees",
"sec_num": "3.3"
},
{
"text": "We validate our cross-framework evaluation procedure on two languages, English and Swedish. For English, we compare the performance of two dependency parsers, MaltParser (Nivre et al., 2006) and MSTParser (McDonald et al., 2005) , and two constituency-based parsers, the Berkeley parser (Petrov et al., 2006) and the Brown parser (Charniak and Johnson, 2005) . All experiments use Penn Treebank (PTB) data. For Swedish, we compare MaltParser and MSTParser with two variants of the Berkeley parser, one trained on phrase structure trees, and one trained on a variant of the Relational-Realizational representation of Tsarfaty and Sima'an (2008) . All experiments use the Talbanken Swedish Treebank (STB) data.",
"cite_spans": [
{
"start": 170,
"end": 190,
"text": "(Nivre et al., 2006)",
"ref_id": "BIBREF22"
},
{
"start": 195,
"end": 228,
"text": "MSTParser (McDonald et al., 2005)",
"ref_id": null
},
{
"start": 287,
"end": 308,
"text": "(Petrov et al., 2006)",
"ref_id": "BIBREF24"
},
{
"start": 330,
"end": 358,
"text": "(Charniak and Johnson, 2005)",
"ref_id": "BIBREF8"
},
{
"start": 616,
"end": 643,
"text": "Tsarfaty and Sima'an (2008)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We use sections 02-21 of the WSJ Penn Treebank for training and section 00 for evaluation and analysis. We use two different native gold standards subscribing to different theories of encoding grammatical relations in tree structures:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English Cross-Framework Evaluation",
"sec_num": "4.1"
},
{
"text": "\u2022 THE DEPENDENCY-BASED THEORY is the theory encoded in the basic Stanford Dependencies (SD) scheme. We obtain the set of basic SD trees using the software of de Marneffe et al. (2006) and train the dependency parsers directly on it.",
"cite_spans": [
{
"start": 178,
"end": 200,
"text": "Marneffe et al. (2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "English Cross-Framework Evaluation",
"sec_num": "4.1"
},
{
"text": "\u2022 THE CONSTITUENCY-BASED THEORY is the theory reflected in the phrase-structure representation of the PTB (Marcus et al., 1993) enriched with function labels compatible with the Stanford Dependencies (SD) scheme. We obtain trees that reflect this theory by TL-Unification of the PTB multi-function trees with the SD multi-function trees (PTB \u2294_tl SD), as illustrated in Figure 4.",
"cite_spans": [
{
"start": 106,
"end": 127,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 366,
"end": 374,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "English Cross-Framework Evaluation",
"sec_num": "4.1"
},
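In a simplified model, the TL-Unification illustrated in Figure 4 can be sketched over bracketing spans. This is an illustrative sketch under assumed representations (a multi-function tree as a dict from (start, end) spans to sets of function labels; the helper names are ours, not the paper's): unification takes the union of the brackets of both trees, merging function labels, and fails when brackets cross.

```python
# Sketch of TL-Unification over span sets (assumed representation:
# a multi-function tree as {(start, end): set_of_function_labels}).

def compatible(a, b):
    """Two brackets can co-exist in one tree iff nested or disjoint."""
    (i, j), (k, l) = a, b
    return j <= k or l <= i or (i <= k and l <= j) or (k <= i and j <= l)

def tl_unify(t1, t2):
    """Union of both span sets with merged labels; None if brackets cross.
    A PTB node with no SD counterpart simply keeps an empty label set."""
    spans = set(t1) | set(t2)
    for a in spans:
        for b in spans:
            if not compatible(a, b):
                return None
    return {s: t1.get(s, set()) | t2.get(s, set()) for s in spans}
```

The failure case (crossing brackets) corresponds to trees whose yields or structures are inconsistent and therefore cannot be unified into one multi-function tree.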
{
"text": "The theory encoded in the multi-function trees corresponding to SD is different from the one obtained by our TL-Unification, as may be seen from the difference between the flat SD multi-function tree and the result of PTB \u2294_tl SD in Figure 4. Another difference concerns coordination structures, encoded as binary-branching trees in SD and as flat productions in PTB \u2294_tl SD. Such differences are not only observable but also quantifiable: using our redefined TED metric, the cross-theory overlap is 0.8571.",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 242,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "English Cross-Framework Evaluation",
"sec_num": "4.1"
},
{
"text": "The two dependency parsers were trained using the same settings as in Tsarfaty et al. (2011), using SVMTool (Gim\u00e9nez and M\u00e0rquez, 2004) to predict part-of-speech tags at parsing time. The two constituency parsers were used with default settings and were allowed to predict their own part-of-speech tags. We report three different evaluation metrics for the different experiments: Figure 4: Conversion of the PTB and SD trees to multi-function trees, followed by TL-Unification of the trees. Note that some PTB nodes remain without an SD label.",
"cite_spans": [
{
"start": 70,
"end": 92,
"text": "Tsarfaty et al. (2011)",
"ref_id": "BIBREF30"
},
{
"start": 109,
"end": 136,
"text": "(Gim\u00e9nez and M\u00e0rquez, 2004)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 380,
"end": 388,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "English Cross-Framework Evaluation",
"sec_num": "4.1"
},
{
"text": "\u2022 LAS/UAS (Buchholz and Marsi, 2006) \u2022 PARSEVAL (Black et al., 1991) \u2022 TEDEVAL as defined in Section 3",
"cite_spans": [
{
"start": 10,
"end": 36,
"text": "(Buchholz and Marsi, 2006)",
"ref_id": "BIBREF4"
},
{
"start": 48,
"end": 68,
"text": "(Black et al., 1991)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "English Cross-Framework Evaluation",
"sec_num": "4.1"
},
{
"text": "We use LAS/UAS for dependency parsers that were trained on the same dependency theory. We use PARSEVAL to evaluate phrase-structure parsers that were trained on PTB trees in which dash features and empty traces are removed. We use our implementation of TEDEVAL to evaluate parsing results across all frameworks under two different scenarios: 3 TEDEVAL SINGLE evaluates against the native gold multi-function trees. TEDEVAL MULTIPLE evaluates against the generalized (cross-theory) multi-function trees. Unlabeled TEDEVAL scores are obtained by simply removing all labels from the multi-function nodes and using unlabeled edit operations. We calculate pairwise statistical significance using a shuffling test with 10K iterations (Cohen, 1995). Tables 1 and 2 present the results of our cross-framework evaluation for English parsing. In the left column of Table 1 we report PARSEVAL scores for constituency-based parsers. As expected, F-scores for the Brown parser are higher than those of the Berkeley parser. F-scores are, however, not applicable across frameworks. In the rightmost column of Table 1 we report the LAS/UAS results for all parsers. If a parser yields a constituency tree, it is converted to and evaluated on SD. Here we see that MST outperforms Malt, though the differences for labeled dependencies are insignificant. We also observe here a familiar pattern from Cer et al. (2010) and others, where the constituency parsers significantly outperform the dependency parsers after conversion of their output into dependencies.",
"cite_spans": [
{
"start": 729,
"end": 742,
"text": "(Cohen, 1995)",
"ref_id": "BIBREF9"
},
{
"start": 1386,
"end": 1403,
"text": "Cer et al. (2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 856,
"end": 863,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1100,
"end": 1107,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "English Cross-Framework Evaluation",
"sec_num": "4.1"
},
{
"text": "The conversion to SD allows one to compare results across formal frameworks, but not without a cost. The conversion involves a set of annotation-specific decisions which may introduce a bias into the evaluation. In the middle column of Table 1 we report the TEDEVAL metrics measured against the generalized gold standard for all parsing frameworks. We can now confirm that the constituency-based parsers significantly outperform the dependency parsers, and that this is not due to specific theoretical decisions of the kind known to affect LAS/UAS metrics (Schwartz et al., 2011). For the dependency parsers we now see that Malt slightly outperforms MST on labeled dependencies, but the difference is insignificant.",
"cite_spans": [
{
"start": 555,
"end": 578,
"text": "(Schwartz et al., 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 238,
"end": 245,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "English Cross-Framework Evaluation",
"sec_num": "4.1"
},
{
"text": "The fact that the discrepancy in theoretical assumptions between different frameworks indeed affects the conversion-based evaluation procedure is reflected in the results we report in Table 2. Here the leftmost and rightmost columns report TEDEVAL scores against each parser's own native gold (SINGLE), and the middle column against the generalized gold (MULTIPLE). Had the theories for SD and PTB \u2294_tl SD been identical, TEDEVAL SINGLE and TEDEVAL MULTIPLE would have been equal in each line. Because of theoretical discrepancies, we see small gaps in parser performance between these cases. Our protocol ensures that such discrepancies do not bias the results.",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 191,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "English Cross-Framework Evaluation",
"sec_num": "4.1"
},
{
"text": "We use the standard training and test sets of the Swedish Treebank (Nivre and Megyesi, 2007) with two gold standards presupposing different theories:",
"cite_spans": [
{
"start": 67,
"end": 92,
"text": "(Nivre and Megyesi, 2007)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Framework Swedish Parsing",
"sec_num": "4.2"
},
{
"text": "\u2022 THE DEPENDENCY-BASED THEORY is the dependency version of the Swedish Treebank. All trees are projectivized (STB-Dep).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Framework Swedish Parsing",
"sec_num": "4.2"
},
{
"text": "\u2022 THE CONSTITUENCY-BASED THEORY is the standard Swedish Treebank with grammatical function labels on the edges of constituency structures (STB). Because there are no parsers that can output the complete STB representation including edge labels, we experiment with two variants of this theory, one which is obtained by simply removing the edge labels and keeping only the phrase-structure labels (STB-PS) and one which is loosely based on the Relational-Realizational scheme of Tsarfaty and Sima'an (2008) but excludes the projection set nodes (STB-RR). RR trees only add function nodes to PS trees, and it holds that STB-PS \u2293 STB-RR = STB-PS. The overlap between the theories expressed in multi-function trees originating from STB-Dep and STB-RR is 0.7559. Our evaluation protocol takes such discrepancies into account while avoiding biases that may be caused by these differences.",
"cite_spans": [
{
"start": 477,
"end": 504,
"text": "Tsarfaty and Sima'an (2008)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Framework Swedish Parsing",
"sec_num": "4.2"
},
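The identity between STB-PS and its generalization with STB-RR can be illustrated in the same simplified span model (an assumed representation, not the paper's implementation): generalization keeps only the brackets present in both trees, with the labels they share, so a theory that merely adds function material generalizes back to the smaller one.

```python
# Sketch of tree generalization over span sets (assumed representation:
# a multi-function tree as {(start, end): set_of_function_labels}).

def generalize(t1, t2):
    """Keep the spans present in both trees with their shared labels;
    theory-specific material found in only one tree is dropped."""
    common = set(t1) & set(t2)
    return {span: t1[span] & t2[span] for span in common}
```

Under this model, if an RR tree consists of the PS brackets plus extra function-only brackets, `generalize(ps, rr)` returns exactly `ps`, mirroring the identity stated above for STB-PS and STB-RR.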
{
"text": "We evaluate MaltParser, MSTParser and two versions of the Berkeley parser, one trained on STB-PS and one trained on STB-RR. We use predicted part-of-speech tags for the dependency parsers, using the HunPoS tagger (Megyesi, 2009), but let the Berkeley parser predict its own tags. We use the same evaluation metrics and procedures as before. Prior to evaluating RR trees using PARSEVAL we strip off the added function nodes. Prior to evaluating them using TEDEVAL we strip off the phrase-structure nodes. Tables 3 and 4 summarize the parsing results for the different Swedish parsers. In the leftmost column of Table 3 we present the constituency-based evaluation measures.",
"cite_spans": [
{
"start": 213,
"end": 228,
"text": "(Megyesi, 2009)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Framework Swedish Parsing",
"sec_num": "4.2"
},
{
"text": "Interestingly, the Berkeley parser performs better when trained on RR trees than when trained on PS trees. These constituency-based scores, however, have limited applicability, and we cannot use them to compare constituency and dependency parsers. In the rightmost column of Table 3 we report the LAS/UAS results for the two dependency parsers.",
"cite_spans": [],
"ref_spans": [
{
"start": 281,
"end": 288,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Cross-Framework Swedish Parsing",
"sec_num": "4.2"
},
{
"text": "Here MST demonstrates higher performance on both labeled and unlabeled dependencies, but the differences on labeled dependencies are insignificant. Since there is no automatic procedure for converting bare-bones phrase-structure Swedish trees to dependency trees, we cannot use LAS/UAS to compare across frameworks, and we use TEDEVAL for cross-framework evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Framework Swedish Parsing",
"sec_num": "4.2"
},
{
"text": "Training the Berkeley parser on RR trees, which encode a mapping of PS nodes to grammatical functions, allows us to compare parse results for trees belonging to the STB theory with trees obeying the STB-Dep theory. For unlabeled TEDEVAL scores, the dependency parsers perform at roughly the same level as the constituency parser, and the difference is insignificant. For labeled TEDEVAL, the dependency parsers significantly outperform the constituency parser. When considering only the dependency parsers, there is a small advantage for Malt on labeled dependencies and an advantage for MST on unlabeled dependencies, but the latter is insignificant. This effect is replicated in Table 4, where we evaluate dependency parsers using TEDEVAL against their own gold theories. Table 4 further confirms that there is a gap between the STB and STB-Dep theories, reflected in the scores against the native and generalized gold.",
"cite_spans": [],
"ref_spans": [
{
"start": 675,
"end": 682,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Cross-Framework Swedish Parsing",
"sec_num": "4.2"
},
{
"text": "We presented a formal protocol for evaluating parsers across frameworks and used it to soundly compare parsing results for English and Swedish. Our approach follows the three-phase protocol of Tsarfaty et al. (2011) , namely: (i) obtaining a formal common ground for the different representation types, (ii) computing the theoretical common ground for each test sentence, and (iii) counting only what counts, that is, measuring the distance between the common ground and the parse tree while discarding annotation-specific edits.",
"cite_spans": [
{
"start": 193,
"end": 215,
"text": "Tsarfaty et al. (2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "A pre-condition for applying our protocol is the availability of a relational interpretation of trees in the different frameworks. For dependency frameworks this is straightforward, as these relations are encoded on top of dependency arcs. For constituency trees with an inherent mapping of nodes onto grammatical relations (Merlo and Musillo, 2005; Gabbard et al., 2006; Tsarfaty and Sima'an, 2008), a procedure for reading relational schemes off the trees is trivial to implement.",
"cite_spans": [
{
"start": 324,
"end": 349,
"text": "(Merlo and Musillo, 2005;",
"ref_id": "BIBREF20"
},
{
"start": 350,
"end": 371,
"text": "Gabbard et al., 2006;",
"ref_id": "BIBREF11"
},
{
"start": 372,
"end": 399,
"text": "Tsarfaty and Sima'an, 2008)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "For parsers that are trained on and parse into bare-bones phrase-structure trees this is not so. Reading off the relational structure may be more costly and require the injection of additional theoretical assumptions via manually written scripts. Scripts that read off grammatical relations based on tree positions work well for configurational languages such as English (de Marneffe et al., 2006), but since grammatical relations are reflected differently in different languages (Postal and Perlmutter, 1977; Bresnan, 2000), a procedure to read off these relations in a language-independent fashion from phrase-structure trees does not, and should not, exist (Rambow, 2010).",
"cite_spans": [
{
"start": 370,
"end": 396,
"text": "(de Marneffe et al., 2006)",
"ref_id": "BIBREF10"
},
{
"start": 478,
"end": 507,
"text": "(Postal and Perlmutter, 1977;",
"ref_id": "BIBREF25"
},
{
"start": 508,
"end": 522,
"text": "Bresnan, 2000)",
"ref_id": "BIBREF3"
},
{
"start": 658,
"end": 672,
"text": "(Rambow, 2010)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "The crucial point is that even when using external scripts for recovering a relational scheme for phrase-structure trees, our protocol has a clear advantage over simply scoring converted trees. Manually created conversion scripts alter the theoretical assumptions inherent in the trees and thus may bias the results. Our generalization operation and three-way TED make sure that theory-specific idiosyncrasies injected through such scripts do not lead to over-penalizing or over-crediting theory-specific structural variations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Certain linguistic structures cannot yet be evaluated with our protocol because of the strict assumption that the labeled spans in a parse form a tree. In the future we plan to extend the protocol to evaluate structures that go beyond linearly ordered trees, in order to allow for non-projective trees and directed acyclic graphs. In addition, we plan to lift the restriction that the parse yield is known in advance, in order to allow for evaluation of joint parse-segmentation hypotheses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We developed a protocol for comparing parsing results across different theories and representation types. The protocol is framework-independent in the sense that it can accommodate any formal syntactic framework that encodes grammatical relations, and language-independent in the sense that no language-specific knowledge is encoded in the procedure. As such, it is adequate for parser evaluation in cross-framework and cross-language tasks and parsing competitions, and using it across the board is expected to open new horizons in our understanding of the strengths and weaknesses of different parsers in the face of different theories and different data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The Proposal: Cross-Framework Evaluation with Multi-Function Trees Our proposal is a three-phase evaluation protocol in the spirit of Tsarfaty et al. (2011). First, we obtain a formal common ground for all frameworks in terms of multi-function trees. Then we obtain a theoretical common ground by means of tree generalization on gold trees. Finally, we calculate TED-based scores that discard the cost of annotation-specific edits. In this section, we define multi-function trees and update the tree-generalization and TED-based metrics to handle multi-function trees that reflect different theories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The ordering can be alphabetic, thematic, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our TedEval software can be downloaded at http://stp.lingfil.uu.se/\u02dctsarfaty/unipar/download.html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgments We thank David McClosky, Marco Kuhlmann, Yoav Goldberg and three anonymous reviewers for useful comments. We further thank Jennifer Foster for the Brown parses and parameter files. This research is partly funded by the Swedish National Science Foundation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A survey on tree edit distance and related problems",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Bille",
"suffix": ""
}
],
"year": 2005,
"venue": "Theoretical Computer Science",
"volume": "337",
"issue": "",
"pages": "217--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Bille. 2005. A survey on tree edit distance and related problems. Theoretical Computer Science, 337:217-239.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A procedure for quantitatively comparing the syntactic coverage of English grammars",
"authors": [
{
"first": "Ezra",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"P"
],
"last": "Abney",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickenger",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Gdaniec",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Hindle",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Ingria",
"suffix": ""
},
{
"first": "Frederick",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "Judith",
"middle": [
"L"
],
"last": "Klavans",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Liberman",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Tomek",
"middle": [],
"last": "Strzalkowski",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the DARPA Workshop on Speech and Natural Language",
"volume": "",
"issue": "",
"pages": "306--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ezra Black, Steven P. Abney, D. Flickenger, Clau- dia Gdaniec, Ralph Grishman, P. Harrison, Don- ald Hindle, Robert Ingria, Frederick Jelinek, Ju- dith L. Klavans, Mark Liberman, Mitchell P. Mar- cus, Salim Roukos, Beatrice Santorini, and Tomek Strzalkowski. 1991. A procedure for quantitatively comparing the syntactic coverage of English gram- mars. In Proceedings of the DARPA Workshop on Speech and Natural Language, pages 306-311.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Tiger treebank",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Stefanie",
"middle": [],
"last": "Dipper",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Lezius",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of TLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Brants, Stefanie Dipper, Silvia Hansen, Wolf- gang Lezius, and George Smith. 2002. The Tiger treebank. In Proceedings of TLT.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Lexical-Functional Syntax",
"authors": [
{
"first": "Joan",
"middle": [],
"last": "Bresnan",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joan Bresnan. 2000. Lexical-Functional Syntax. Blackwell.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "CoNLL-X shared task on multilingual dependency parsing",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Buchholz",
"suffix": ""
},
{
"first": "Erwin",
"middle": [],
"last": "Marsi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of CoNLL-X",
"volume": "",
"issue": "",
"pages": "149--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of CoNLL-X, pages 149-164.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Wide-coverage deep statistical parsing using automatic dependency structure annotation",
"authors": [
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Burke",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Ruth",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Donovan",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Van Genabith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "1",
"pages": "81--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aoife Cahill, Michael Burke, Ruth O'Donovan, Stefan Riezler, Josef van Genabith, and Andy Way. 2008. Wide-coverage deep statistical parsing using auto- matic dependency structure annotation. Computa- tional Linguistics, 34(1):81-124.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Parser evaluation: A survey and a new proposal",
"authors": [
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Sanfilippo",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "447--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Carroll, Edward Briscoe, and Antonio Sanfilippo. 1998. Parser evaluation: A survey and a new pro- posal. In Proceedings of LREC, pages 447-454.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Parsing to Stanford Dependencies: Trade-offs between speed and accuracy",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Marie-Catherine de Marneffe, Daniel Ju- rafsky, and Christopher D. Manning. 2010. Pars- ing to Stanford Dependencies: Trade-offs between speed and accuracy. In Proceedings of LREC.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Coarseto-fine n-best parsing and maxent discriminative reranking",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse- to-fine n-best parsing and maxent discriminative reranking. In Proceedings of ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Empirical Methods for Artificial Intelligence",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Cohen. 1995. Empirical Methods for Artificial Intelligence. The MIT Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Generating typed dependency parses from phrase structure parses",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "449--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC, pages 449-454.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Fully parsing the Penn treebank",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Gabbard",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Seth",
"middle": [],
"last": "Kulick",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceeding of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "184--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Gabbard, Mitchell Marcus, and Seth Kulick. 2006. Fully parsing the Penn treebank. In Proceed- ing of HLT-NAACL, pages 184-191.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "SVMTool: A general POS tagger generator based on support vector machines",
"authors": [
{
"first": "Jes\u00fas",
"middle": [],
"last": "Gim\u00e9nez",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jes\u00fas Gim\u00e9nez and Llu\u00eds M\u00e0rquez. 2004. SVMTool: A general POS tagger generator based on support vector machines. In Proceedings of LREC.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dependency Parsing. Number 2 in Synthesis Lectures on Human Language Technologies",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra K\u00fcbler, Ryan McDonald, and Joakim Nivre. 2009. Dependency Parsing. Number 2 in Synthesis Lectures on Human Language Technologies. Mor- gan & Claypool Publishers.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A dependency-based method for evaluating broad-coverage parsers",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of IJCAI-95",
"volume": "",
"issue": "",
"pages": "1420--1425",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. 1995. A dependency-based method for evaluating broad-coverage parsers. In Proceedings of IJCAI-95, pages 1420-1425.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The Penn Arabic treebank: Building a large-scale annotated Arabic corpus",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Maamouri",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Buckwalter",
"suffix": ""
},
{
"first": "Wigdan",
"middle": [],
"last": "Mekki",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of NEMLAR International Conference on Arabic Language Resources and Tools",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Maamouri, Ann Bies, Tim Buckwalter, and Wigdan Mekki. 2004. The Penn Arabic treebank: Building a large-scale annotated Arabic corpus. In Proceedings of NEMLAR International Conference on Arabic Language Resources and Tools.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computa- tional Linguistics, 19:313-330.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Non-projective dependency parsing using spanning tree algorithms",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Kiril",
"middle": [],
"last": "Ribarov",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
}
],
"year": 2005,
"venue": "HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Haji\u010d. 2005. Non-projective dependency parsing using spanning tree algorithms. In HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 523-530, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The open source tagger HunPoS for Swedish",
"authors": [
{
"first": "Beata",
"middle": [],
"last": "Megyesi",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 17th Nordic Conference of Computational Linguistics (NODALIDA)",
"volume": "",
"issue": "",
"pages": "239--241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beata Megyesi. 2009. The open source tagger HunPoS for Swedish. In Proceedings of the 17th Nordic Conference of Computational Linguistics (NODALIDA), pages 239-241.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dependency Syntax: Theory and Practice",
"authors": [
{
"first": "Igor",
"middle": [],
"last": "Mel'\u010duk",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Igor Mel'\u010duk. 1988. Dependency Syntax: Theory and Practice. State University of New York Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Accurate function parsing",
"authors": [
{
"first": "Paola",
"middle": [],
"last": "Merlo",
"suffix": ""
},
{
"first": "Gabriele",
"middle": [],
"last": "Musillo",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "620--627",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paola Merlo and Gabriele Musillo. 2005. Accurate function parsing. In Proceedings of EMNLP, pages 620-627.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bootstrapping a Swedish Treebank using cross-corpus harmonization and annotation projection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Beata",
"middle": [],
"last": "Megyesi",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of TLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre and Beata Megyesi. 2007. Bootstrapping a Swedish Treebank using cross-corpus harmonization and annotation projection. In Proceedings of TLT.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "MaltParser: A data-driven parser-generator for dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "2216--2219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Johan Hall, and Jens Nilsson. 2006. MaltParser: A data-driven parser-generator for dependency parsing. In Proceedings of LREC, pages 2216-2219.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Evaluation of dependency parsers on unbounded dependencies",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "McDonald",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "813--821",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Laura Rimell, Ryan McDonald, and Carlos G\u00f3mez-Rodr\u00edguez. 2010. Evaluation of dependency parsers on unbounded dependencies. In Proceedings of COLING, pages 813-821.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning accurate, compact, and interpretable tree annotation",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Romain",
"middle": [],
"last": "Thibaux",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of ACL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Toward a universal characterization of passivization",
"authors": [
{
"first": "Paul",
"middle": [
"M"
],
"last": "Postal",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Perlmutter",
"suffix": ""
}
],
"year": 1977,
"venue": "Proceedings of the 3rd Annual Meeting of the Berkeley Linguistics Society",
"volume": "",
"issue": "",
"pages": "394--417",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul M. Postal and David M. Perlmutter. 1977. Toward a universal characterization of passivization. In Proceedings of the 3rd Annual Meeting of the Berkeley Linguistics Society, pages 394-417.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The simple truth about dependency and phrase structure representations: An opinion piece",
"authors": [
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of HLT-ACL",
"volume": "",
"issue": "",
"pages": "337--340",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Owen Rambow. 2010. The simple truth about dependency and phrase structure representations: An opinion piece. In Proceedings of HLT-ACL, pages 337-340.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Neutralizing linguistically problematic annotations in unsupervised dependency parsing evaluation",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "663--672",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Schwartz, Omri Abend, Roi Reichart, and Ari Rappoport. 2011. Neutralizing linguistically problematic annotations in unsupervised dependency parsing evaluation. In Proceedings of ACL, pages 663-672.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Building a Tree-Bank for Modern Hebrew Text",
"authors": [
{
"first": "Khalil",
"middle": [],
"last": "Sima'an",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Itai",
"suffix": ""
},
{
"first": "Yoad",
"middle": [],
"last": "Winter",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Altman",
"suffix": ""
},
{
"first": "Noa",
"middle": [],
"last": "Nativ",
"suffix": ""
}
],
"year": 2001,
"venue": "Traitement Automatique des Langues",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khalil Sima'an, Alon Itai, Yoad Winter, Alon Altman, and Noa Nativ. 2001. Building a Tree-Bank for Modern Hebrew Text. In Traitement Automatique des Langues.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Relational-Realizational parsing",
"authors": [
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "Khalil",
"middle": [],
"last": "Sima'an",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reut Tsarfaty and Khalil Sima'an. 2008. Relational-Realizational parsing. In Proceedings of COLING.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Evaluating dependency parsing: Robust and heuristics-free cross-framework evaluation",
"authors": [
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Evelina",
"middle": [],
"last": "Andersson",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reut Tsarfaty, Joakim Nivre, and Evelina Andersson. 2011. Evaluating dependency parsing: Robust and heuristics-free cross-framework evaluation. In Proceedings of EMNLP.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Simple fast algorithms for the editing distance between trees and related problems",
"authors": [
{
"first": "Kaizhong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dennis",
"middle": [],
"last": "Shasha",
"suffix": ""
}
],
"year": 1989,
"venue": "SIAM Journal on Computing",
"volume": "18",
"issue": "",
"pages": "1245--1262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaizhong Zhang and Dennis Shasha. 1989. Simple fast algorithms for the editing distance between trees and related problems. SIAM Journal on Computing, volume 18, pages 1245-1262.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Deterministic conversion into function trees.",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Unary chains in function trees",
"num": null
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "The Evaluation Protocol. Different formal frameworks yield different parse and gold formal types. All types are transformed into multi-function trees. All gold trees enter generalization to yield a new gold for each sentence. The different \u03b4 arcs represent the different edit scripts used for calculating the TED-based scores.",
"num": null
},
"TABREF1": {
"text": "",
"num": null,
"content": "<table><tr><td colspan=\"4\">: English cross-framework evaluation: Three</td></tr><tr><td colspan=\"4\">measures as applicable to the different schemes. Bold-face scores are highest in their column. Italic scores</td></tr><tr><td colspan=\"4\">are the highest for dependency parsers in their column.</td></tr><tr><td>Formalism</td><td>PS Trees</td><td>MF Trees</td><td>Dep Trees</td></tr><tr><td>Theory</td><td colspan=\"2\">PTB lt SD (PTB lt SD)</td><td>SD</td></tr><tr><td/><td/><td>t SD</td><td/></tr><tr><td>Metrics</td><td>TEDEVAL</td><td>TEDEVAL</td><td>TEDEVAL</td></tr><tr><td/><td>SINGLE</td><td>MULTIPLE</td><td>SINGLE</td></tr><tr><td>MALT</td><td>N/A</td><td>U: 0.9525 L: 0.9088</td><td>U: 0.9524 L: 0.9186</td></tr><tr><td>MST</td><td>N/A</td><td>U: 0.9549 L: 0.9049</td><td>U: 0.9548 L: 0.9149</td></tr><tr><td>BERKELEY</td><td>U: 0.9645 L: 0.9271</td><td>U: 0.9677 L: 0.9227</td><td>U: 0.9649 L: 0.9324</td></tr><tr><td>BROWN</td><td>U: 0.9667 L: 0.9301</td><td>U: 0.9702 L: 0.9264</td><td>U: 0.9679 L: 0.9362</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF2": {
"text": "",
"num": null,
"content": "<table><tr><td>: English cross-framework evaluation: TEDE-</td></tr><tr><td>VAL scores against gold and generalized gold. Bold-</td></tr><tr><td>face scores are highest in their column. Italic scores</td></tr><tr><td>are highest for dependency parsers in their column.</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF4": {
"text": "",
"num": null,
"content": "<table><tr><td colspan=\"4\">: Swedish cross-framework evaluation: Three</td></tr><tr><td colspan=\"4\">measures as applicable to the different schemes. Bold-</td></tr><tr><td colspan=\"3\">face scores are the highest in their column.</td><td/></tr><tr><td>Formalism</td><td>PS Trees</td><td colspan=\"2\">MF Trees Dep Trees</td></tr><tr><td>Theory</td><td>STB</td><td>STB t Dep</td><td>Dep</td></tr><tr><td>Metrics</td><td>TEDEVAL</td><td colspan=\"2\">TEDEVAL TEDEVAL</td></tr><tr><td/><td>SINGLE</td><td>MULTIPLE</td><td>SINGLE</td></tr><tr><td>MALT</td><td>N/A</td><td>U: 0.9266 L: 0.8225</td><td>U: 0.9264 L: 0.8372</td></tr><tr><td>MST</td><td>N/A</td><td>U: 0.9275 L: 0.8121</td><td>U: 0.9272 L: 0.8275</td></tr><tr><td>BKLY-STB-RR</td><td>U: 0.9239 L: 0.7946</td><td>U: 0.9281 L: 0.7861</td><td>N/A</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF5": {
"text": "Swedish cross-framework evaluation: TEDE-VAL scores against the native gold and the generalized gold. Boldface scores are the highest in their column.",
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null
}
}
}
}