| { |
| "paper_id": "E17-1018", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T10:51:58.855543Z" |
| }, |
| "title": "A method for in-depth comparative evaluation: How (dis)similar are outputs of POS taggers, dependency parsers and coreference resolvers really?", |
| "authors": [ |
| { |
| "first": "Don", |
| "middle": [], |
| "last": "Tuggener", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Zurich Institute of Computational Linguistics", |
| "location": {} |
| }, |
| "email": "tuggener@cl.uzh.ch" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper proposes a generic method for the comparative evaluation of system outputs. The approach is able to quantify the pairwise differences between two outputs and to unravel in detail what the differences consist of. We apply our approach to three tasks in Computational Linguistics, i.e. POS tagging, dependency parsing, and coreference resolution. We find that system outputs are more distinct than the (often) small differences in evaluation scores seem to suggest.", |
| "pdf_parse": { |
| "paper_id": "E17-1018", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper proposes a generic method for the comparative evaluation of system outputs. The approach is able to quantify the pairwise differences between two outputs and to unravel in detail what the differences consist of. We apply our approach to three tasks in Computational Linguistics, i.e. POS tagging, dependency parsing, and coreference resolution. We find that system outputs are more distinct than the (often) small differences in evaluation scores seem to suggest.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "While there exist well-defined procedures for evaluating system outputs against manually annotated gold data for many tasks in Computational Linguistics, little effort generally goes into identifying and analysing the differences between the outputs themselves. Instead, system outputs are usually compared indirectly: the standard evaluation protocol for many tasks consists of comparing a system output (the response) to a manual annotation of the same data (the key). The difference between the response and the key is quantified by a similarity metric such as accuracy, and different system outputs are compared to each other by ranking their scores with respect to that metric.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "However, comparing the scores of the similarity metric does not paint the full picture of the differences between the outputs, as we will demonstrate. There are hardly any principled or generic evaluation approaches that aim at comparing two or more system responses directly to investigate, highlight, and quantify their differences in detail. Closing this gap is desirable, because progress in many NLP tasks is made in small steps, and it is often left unclear what the specific contribution of a novel approach is if the comparison to related work is based solely on a (sometimes marginal) improvement in F1 score or accuracy. Furthermore, an overall improvement in accuracy achieved by a new approach might come at the cost of failing in some areas where a baseline system was correct. Vice versa, a new approach might not improve overall accuracy, but solve particular problems that no other system has been able to address.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "We propose an evaluation approach which aims at shedding light on the particular differences between system responses and which is intended as a complement to evaluation metrics such as F1 score and accuracy. By doing so, we strive to provide researchers with a tool that gives insight into the particular strengths and weaknesses of their system in comparison to others. 1 Our method is also useful in iterative system development, as it tracks changes in the outputs of different system versions or feature sets. Furthermore, our approach is able to compare multiple system outputs at once, which enables it to identify hard (or easy) problem areas by assessing how many of the systems solve a problem correctly, and to derive corresponding upper bounds for system ensembles. The performance difference between the simulated ensemble and the individual systems serves as an additional indicator of the difference between the system outputs.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "We exemplify the application of our approach by aiming to answer the question of how (dis)similar the outputs of several state-of-the-art systems really are across different NLP tasks. We first motivate why evaluation metrics such as accuracy are not suited to comparing outputs (next section). We then propose a method that introduces an inventory to systematically classify and quantify output differences (section 2). Next, we demonstrate how combining a set of outputs can be used to measure their divergence and to identify hard (and easy) problem areas by looking at the upper bounds in performance achieved by an oracle output combination (section 3).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "First, let us motivate why comparing accuracy or F1 scores is not a suitable method for establishing the (dis)similarity of system outputs. Consider a simple synthetic problem set with four test cases {A, B, C, D} (e.g. a sequence of POS tags). A system response S 1 correctly solves the cases A and B, while a system response S 2 returns the correct answers for the cases C and D. In terms of accuracy, both responses achieve identical scores, i.e. 50%. However, their output is maximally dissimilar. Extending the set of cases, assume five problems, {A, B, C, D, E}, and three responses S 1 , S 2 , and S 3 as shown in table 1. Although the three responses achieve the same accuracy (left table), their pairwise overlap in terms of identical correct responses (right table) varies considerably, i.e. S 1 is much more similar to S 2 (two shared answers) than to S 3 (one shared answer). In fact, establishing the similarity of the responses S 1 , S 2 , and S 3 is more involved, because we have so far left out the overlap of the incorrect answers in the responses. Consider the full responses in table 2. The overlap metric (right table) now compares how many of the cells in two rows have identical answers, regardless of whether the answer is correct. The overlap-based similarities between the systems have become more diverse, i.e. S 1 and S 3 are more dissimilar than in the previous table, and the similarities of the pairs (S 1 , S 2 ) and (S 2 , S 3 ) are now distinct, because S 2 and S 3 share the error Z (besides the correct answers C and D), while S 1 and S 2 do not share an error.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 759, |
| "end": 772, |
| "text": "(right table)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "1.1" |
| }, |
| { |
"text": "Table 1. Key: A B C D E. Responses (correct answers only): S 1 = A B C - - (Acc. 60%); S 2 = - B C D - (Acc. 60%); S 3 = - - C D E (Acc. 60%). Overlap: (S 1 , S 2 ) = 67%; (S 2 , S 3 ) = 67%; (S 1 , S 3 ) = 33%",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "1.1" |
| }, |
| { |
"text": "Table 2. Key: A B C D E. Full responses: S 1 = A B C X Y (Acc. 60%); S 2 = Z B C D U (Acc. 60%); S 3 = Z W C D E (Acc. 60%). Overlap: (S 1 , S 2 ) = 40%; (S 2 , S 3 ) = 60%; (S 1 , S 3 ) = 20%",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "Hence, evaluating systems based on performance metrics such as accuracy and F1 scores provides no insight into the differences between the systems and is not able to accurately quantify the similarities between them. That is, a small difference in accuracy does not necessarily imply a high similarity of the outputs, and, vice versa, a larger difference in accuracy does not necessarily signify vastly dissimilar outputs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "1.1" |
| }, |
| { |
"text": "Moreover, evaluation based on scores in performance metrics such as F1 does not detail in what regard a system performs better than another. Two systems might implement very distinct approaches, but achieve very similar evaluation scores. Based on, e.g., F1 alone, we cannot tell whether a response S 2 performs better than a response S 1 because a) it solves the same problems as S 1 plus some additional ones, or b) S 2 and S 1 solve quite diverse sets of problems and S 2 happens to solve a few more in its area of expertise. Additionally, a system that performs better than a baseline is bound to make errors where the baseline was correct. The overall accuracies cannot tell us how often this is the case.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "In summary, the comparison of systems based on overall performance scores only lets us glimpse the proverbial tip of the iceberg. Therefore, our approach to comparative evaluation features three main points of interest:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "1.1" |
| }, |
| { |
"text": "1. How can the differences between system responses be quantified? 2. What is the nature of the difference between two responses? 3. How can we assess the divergence of a set of responses, i.e. how complementary are they?",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "We try to answer these questions regarding three main tasks, namely POS tagging, dependency parsing, and coreference resolution. We select these tasks because they are fairly widespread procedures in Computational Linguistics and their evaluation increases in complexity. While we limit ourselves to these, we believe our approach to be generic enough to be applied to other labeling problems, such as named entity recognition and semantic role labeling.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "As argued above, system responses differ both regarding the correct answers they give and the errors they make. The underlying idea of our approach is to assess how many of the labeled linguistic units (i.e. tokens) in the key have different labels in the responses, regardless of whether the labels are correct. 2 In a second step, we use a class inventory to analyse and quantify these differences in more detail. Formally, given a set of tokens T and two accompanying system responses S 1 and S 2 , we quantify how many of the tokens t i \u2208 T have a different label in S 1 and S 2 :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "diff(S_1, S_2 \\mid T) = \\frac{|\\forall t_i \\in T : label(t_i, S_1) \\neq label(t_i, S_2)|}{|T|}",
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
"text": "Note that switching the inequality condition (\u2260) to equality (=) yields the accuracy metric. That is, taking S 1 as the key and S 2 as the response and calculating accuracy produces the inverse of our metric, i.e. 1 \u2212 diff (S 1 , S 2 | T ), since accuracy is the ratio of tokens that have identical labels. Why not, then, simply use S 1 as the key and S 2 as the response and calculate accuracy? While this answers whether two systems solve a similar or diverse set of problems, it does not enable us to identify the sources of the differences that drive the better performance of one response over the other. That is, if a token has a different label in S 1 and S 2 , we cannot tell which of the responses, if either, is correct. Hence, we need to look at the gold labels of the tokens T in a key K. This enables us to categorise differences in the outputs into three distinct and informative classes 3 :",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Correction: S 1 labels a token incorrectly, S 2 corrects this error \u2022 New error: S 1 is correct, S 2 introduces an error \u2022 Changed error: Both S 1 and S 2 are incorrect but have different labels", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The general algorithm to quantify differences in two responses S 1 and S 2 given a set of tokens t i...n in a key K is outlined in algorithm 1. This procedure lets us track and count how often S 2 has a different label than S 1 , classify the difference, and calculate the percentage of each class of difference. The approach can be applied straightforwardly to comparing outputs of POS taggers and dependency parsers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Algorithm 1 Track differences in two responses Input: Key K tokens t i...n , Responses S 1 , S 2 Output: Difference D, Changes C 1: for t i \u2208 K do 2: G = label(t i , K)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
| "text": "3:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
| "text": "L 1 = label(t i , S 1 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
| "text": "4:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
| "text": "L 2 = label(t i , S 2 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
| "text": "5:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
| "text": "T okCnt + + 6: if L 1 = L 2 then 7: if L 2 = G then 8: C[correction][L 1 , L 2 ] + + 9:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
| "text": "else if L 1 = G then 10:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
| "text": "C[new error][L 1 , L 2 ] + + 11:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
| "text": "else 12:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
| "text": "C[changed error][L 1 , L 2 ] + + 13: DiffLabel + + 14: D = DiffLabel T okCnt 15: return D, C", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantifying differences in system responses", |
| "sec_num": "2" |
| }, |
| { |
"text": "We compare three POS taggers that can be used off-the-shelf to tag German: the Stanford POS Tagger (Toutanova et al., 2003) , the TreeTagger (Schmid, 1995) , and the Clevertagger (Sennrich et al., 2013 , state-of-the-art). Following Sennrich et al. (2013), we use 3000 sentences from the T\u00fcbaD/Z (Telljohann et al., 2004) , a corpus of articles from a German newspaper, as a test set. 4 Table 3 shows the labeling accuracy of the POS taggers and the percentage of correctly tagged sentences. The accuracy improvement of Clevertagger over TreeTagger is +1.27 points, and the percentage of correctly tagged sentences increases substantially (+9.9 points). In comparison to the Stanford tagger 5 , Clevertagger raises performance by roughly 6 points in accuracy, but almost doubles the number of correctly tagged sentences (Table 3 : Accuracy and differences between POS taggers).",
| "cite_spans": [ |
| { |
| "start": 99, |
| "end": 123, |
| "text": "(Toutanova et al., 2003)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 141, |
| "end": 155, |
| "text": "(Schmid, 1995)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 179, |
| "end": 201, |
| "text": "(Sennrich et al., 2013", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 294, |
| "end": 319, |
| "text": "(Telljohann et al., 2004)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 385, |
| "end": 392, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 725, |
| "end": 732, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "POS tagging", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "In the lower table, we see that although the accuracy difference puts the Stanford tagger closer to the TreeTagger (4.48) than to the Clevertagger (5.75), the Stanford tagger's response is more different from that of the TreeTagger (11.06) than from that of the Clevertagger (9.41). Comparing the two best performing taggers, we see that despite an accuracy difference of only 1.27 points, they label 4.96% of the tokens differently.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "POS tagging", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "To get a more detailed understanding of the differences, we apply algorithm 1 to the two outputs; the results are shown in table 4 6 , listing the five most frequent changes per difference class. 7 Of the 4.96% of labels that differ between Clevertagger and TreeTagger, 58.71% are corrections, 33.13% are new errors, and 8.15% are changed errors. 8 That is, one third of the changes that Clevertagger introduces are errors. This is a noteworthy observation that applies to all our system comparisons: every improved response introduces a considerable number of errors with respect to the baseline, i.e. it invalidates correct decisions of the baseline. While this observation is to some degree expected, our method is able to quantify and analyse such changes in detail.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "POS tagging", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "Regarding the differences, we see that both the most frequent correction (NN\u2192NE) and the most frequent new error (NE\u2192NN) revolve around the confusion of named entities and common nouns, which is especially ",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "POS tagging", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "The next task we investigate is dependency parsing. We choose English as the test language due to the lack of multiple available parsers for other languages. We evaluate Google's recently released Parsey McParseface (Andor et al., 2016 , state-of-the-art) and two versions of the Stanford parser, i.e. the PCFG and the Neural Network versions (Chen and Manning, 2014) . We follow the standard evaluation protocol, using section 23 of the Penn Treebank (Marcus et al., 1993) as a test set and excluding punctuation tokens. We evaluate on Stanford Dependency labels (de Marneffe and Manning, 2008), since Parsey supports only these. 9 We apply the parsers and their models \"as is\", i.e. we do not change any configuration settings. Table 5 : Parser performance and difference",
| "cite_spans": [ |
| { |
| "start": 225, |
| "end": 244, |
| "text": "(Andor et al., 2016", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 351, |
| "end": 375, |
| "text": "(Chen and Manning, 2014)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 460, |
| "end": 481, |
| "text": "(Marcus et al., 1993)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 636, |
| "end": 637, |
| "text": "9", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 735, |
| "end": 742, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dependency parsing", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In table 5, we report the unlabeled attachment score (UAS), the labeling score (LS), and the labeled attachment score (LAS) for the parsers. Furthermore, we evaluate how many of the sentences are fully parsed correctly given each criterion.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "UAS", |
| "sec_num": null |
| }, |
| { |
"text": "We see that Parsey outperforms the Stanford parsers mainly due to its performance in attachment (UAS). The performance differences in assigning grammatical labels (LS) are comparably marginal. Parsey also shows almost identical performance in attaching and labeling tokens. However, there is a gap in labeled attachment score, which indicates that although Parsey attaches more tokens correctly than the other parsers, it does not necessarily assign the correct grammatical label to those tokens. Looking at the difference chart, we see that despite the rather small differences in LAS (1-4 points), the parsers attach and label around 15% of the tokens differently. The Stanford parsers differ by only 1.07 points in LAS, but this difference is based on 14.01% (diff ) of the tokens in the test set. Parsey outperforms the Stanford NN parser by 2.51 LAS, based on 13.62% of the tokens. To gain a better understanding of the differences contained in these 13.62% of the tokens, we apply algorithm 1, whose output is shown in ",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "UAS", |
| "sec_num": null |
| }, |
| { |
"text": "The final task we investigate is coreference resolution. We choose three freely available systems for English, again due to the lack of available systems for other languages: the Stanford statistical coreference resolver (Clark and Manning, 2015 , state-of-the-art), HOTCoref, and the Berkeley coreference system (Durrett and Klein, 2013). We use the CoNLL 2012 shared task test set (Pradhan et al., 2012) . The coreference task differs from the previous two in that not all tokens in a document partake in coreference relations (whereas all tokens are in syntactic relations and carry a POS tag). Furthermore, the linguistic units of coreference relations are not single word tokens but syntactic units called mentions (i.e. mostly noun phrases). Therefore, we have to adapt our similarity metric in equation 1. To quantify the difference of two coreference system outputs S 1 and S 2 , given a key K, we count how many of the mentions m are classified differently using a mention classification function c:",
| "cite_spans": [ |
| { |
| "start": 221, |
| "end": 245, |
| "text": "(Clark and Manning, 2015", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 371, |
| "end": 393, |
| "text": "(Pradhan et al., 2012)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference resolution", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "diff(S_1, S_2 \\mid K) = \\frac{|\\forall m \\in S_1 \\cap S_2 \\cap K : c(m, S_1) \\neq c(m, S_2)|}{|\\forall m \\in S_1 \\cap S_2 \\cap K|}",
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Coreference resolution", |
| "sec_num": "2.3" |
| }, |
| { |
"text": "The mention classification function c requires a class inventory, which is not featured by the common evaluation metrics for coreference resolution. 10 Therefore, we adapt the mention classification paradigm introduced in the ARCS framework for coreference resolution evaluation (Tuggener, 2014) which assigns one of the following four classes to a mention m given a key K and a system response S: TP (true positive, m is resolved correctly), FP (false positive, m is resolved in S but not coreferent in K), FN (false negative, m is coreferent in K but not resolved in S), and WL (wrong linkage, m is resolved in S but linked to a wrong antecedent). However, one issue with ARCS is to determine a criterion for the TP class, i.e. under what circumstances m is regarded as resolved correctly. Tuggener (2014) proposed to determine correct antecedents based on the requirements of prospective downstream applications. 11 We implement one loose criterion and regard m as correctly resolved if any of its antecedents in S is also an antecedent of m in K. Conversely, if none of the antecedents of m overlap in S and K, we label m as WL. This yields the ARCS any metric. Alternatively, we require that the closest preceding nominal antecedent of m in S is also an antecedent of m in K, which yields the ARCS nom metric. This metric is more conservative in assigning the TP class, but implements a more realistic criterion for correct antecedents. 10 The common metrics analyse either the links between mentions or calculate a percentage of overlapping mentions in coreference chains in the key and a response. They are not able to determine whether a given mention m is resolved correctly or to assign a class to it.",
| "cite_spans": [ |
| { |
| "start": 278, |
| "end": 294, |
| "text": "(Tuggener, 2014)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 539, |
| "end": 554, |
| "text": "Tuggener (2014)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 663, |
| "end": 665, |
| "text": "11", |
| "ref_id": null |
| }, |
| { |
| "start": 1168, |
| "end": 1170, |
| "text": "10", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference resolution", |
| "sec_num": "2.3" |
| }, |
| { |
"text": "11 Machine translation requires pronouns to be linked to nominal antecedents, sentiment analysis needs named entity antecedents (if available), etc. Such requirements define correct antecedents from the perspective of downstream applications.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference resolution", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The official CoNLL score MELA (average of MUC, CEAFE, and BCUB) and the recently proposed LEA metric (Moosavi and Strube, 2016), which addresses several issues of of the other metrics, as well as the ARCS scores, are given in table 7. Using the ARCS class inventory and equation 2, we quantify how many of the mentions are classified differently in the system responses. The F1 scores are lowest for the LEA metric, because it gives more weight to errors regarding longer coreference chains. The ARCS any metric assigns the highest scores due to the loose criterion that any antecedent is correct as long as it is in the key chain of a given mention. Furthermore, all the metrics agree on the ranking of the systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference resolution", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The mention-based differences between the systems are considerably larger than the relatively small differences in F1 scores suggest. The Stanford systems outperforms HOTCoref by 2.3 MELA, 3.79 LEA F1, and 3.03 ARCS any F1, but the systems process one fourth (26.30%) of the mentions differently in the ARCS any setting. For the ARCS nom criterion, the differences are even larger. The Stanford system outperforms the Berkeley system by 2.91 ARCS nom F1, but the systems process 35.39% of the mentions differently. Furthermore, we observe that the differences in F1 (\u2206 F1) do not correlate with the differences of the outputs (diff ) for both ARCS metrics. Given ARCS nom , we see that the smallest difference in F1 (Stanford \u2194 HOTCoref: 0.09) actually occurs between the two responses that the diff metric deems most dissimilar (37.45).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference resolution", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Finally, we apply algorithm 1, using the ARCS nom criterion and our mention classification scheme to the two best performing systems, i.e. We see that less than 50% of the changes that the Stanford system introduces are corrections (44.62%). But this percentage is still higher than the newly introduced errors (41.65%); hence the improvement in overall F1. Furthermore, the most frequent change is wrong linkages to true positives (wl \u2192 tp). The most frequent new error also involves true mentions, i.e. attaching correctly resolved mentions to incorrect antecedents (tp \u2192 wl). Recovering false negatives and rendering true positives to false negatives occurs equally frequent, roughly. Hence, the performance difference stems mainly from attaching anaphoric mention to (nominal) antecedents, rather than from deciding which mentions to resolve, which are two subproblems in coreference resolution.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference resolution", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Lastly, we combine the system outputs per task and calculate the upper bounds for perfect system combinations by deeming a token labeled correctly if at least one of the systems provides the correct label. The upper bounds are intended to be another measure of the (dis)similarity of the outputs: the higher the upper bound, the higher the divergence of the outputs. Furthermore, looking at per-label performance of all systems, we can identify labels with low scores but high upper bounds, which is an interesting starting point for future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System combination", |
| "sec_num": "3" |
| }, |
| { |
"text": "We start with the POS tagging task and present the upper bound of the system combination in table 9. We also indicate the accuracy gains for the ten most frequent POS tags relative to the best performing tagger (Clevertagger). Table 9 : POS tagging upper bounds and accuracy (highest scores in green; lowest in red; middle in yellow)",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 231, |
| "end": 238, |
| "text": "Table 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "POS tagging", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "The Stanford tagger, despite having the lowest overall accuracy, achieves the highest accuracy on named entities (NE), while the TreeTagger struggles particularly in this category. The TreeTagger surpasses the other taggers by a wide margin on finite verbs (VVFIN) and finite auxiliary verbs (VAFIN). Clevertagger performs best overall, but interestingly, it achieves the highest accuracy on only three of the ten most frequent POS tags. Looking at the overall upper bound, we see that it more than halves the error rate of the best performing system and approaches 99% accuracy. The POS tag that profits most from the combination is the named entity tag. Interestingly, all taggers have low accuracy on this tag, but the upper bound of the combination raises it drastically. Hence, it seems that the taggers diverge most here, which is in line with our analysis of the difference between the two best performing systems in table 4.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "POS tagging", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Next, we analyse the upper bounds of the combination of the dependency parsers, given in table 10. in contrast to the POS tagging task, we find that the best performing system, Parsey (P-MP) achieves highest LAS for almost all considered labels. Still, its overall LAS is drastically increased by the upper bound (+5.99) of the perfect system combination. Two of the labels that benefit the most of the combination are amod (adjec-tival modifier), which is often confused with nn (noun compound modifier) as we saw in table 6, and advmod (adverb modifier). All parsers have below 90 LAS for these labels, but the combination raises performance to 95.26 and 91.40, respectively. Furthermore, prepositions (prep) gains considerably in LAS in the combination. We observed in table 6 that almost ten percent of the difference between Parsey and the Stanford NN parser stem from correcting attachments of prepositions. However, also more than seven percent of the difference stems from invalidating correctly attached prepositions in the Stanford NN output. The large performance jump in the combination of the systems is further evidence that the parsers are highly complementary with respect to prepositions. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency parsing", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "For the coreference task, it is not trivial to calculate the F1 upper bound of the response combination, as the systems do no feature the same mentions in their outputs 12 , and disentangling the false positives is a cumbersome undertaking. Therefore, we limit our investigation to the gold mentions in the key and count for how many of them at least one of the responses produces a correct nominal antecedent, which yields the upper bound for ARCS nom recall. To gain a deeper insight into the benefits of the combination and the performance of the systems, we divide the mentions into nouns (named entities and common nouns), personal pronouns (PRP), and possessive pronouns (PRP$). Results are given in table 11. 13 The system with overall best recall features the highest recall with respect to all mention types. 12 The systems have to decide which NPs they consider for coreference resolution (the anaphoricity detection problem). I.e. the mentions are not known beforehand, and the systems ", |
| "cite_spans": [ |
| { |
| "start": 716, |
| "end": 718, |
| "text": "13", |
| "ref_id": null |
| }, |
| { |
| "start": 818, |
| "end": 820, |
| "text": "12", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference resolution", |
| "sec_num": "3.3" |
| }, |
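The gold-mention recall upper bound described above can be sketched under a simplified, hypothetical data shape (each response maps a mention id to one predicted antecedent; the gold maps each mention to its set of correct nominal antecedents) — the names and structures below are illustrative, not the paper's actual representation.

```python
def arcs_nom_recall_upper_bound(gold_antecedents, responses):
    """Upper bound on nominal-antecedent recall for a system combination:
    a gold mention counts as resolved if at least one response links it
    to a correct nominal antecedent.

    gold_antecedents: {mention_id: set of correct antecedent ids}
    responses: list of {mention_id: predicted antecedent id} dicts
    """
    resolved = sum(
        any(resp.get(mention) in antecedents for resp in responses)
        for mention, antecedents in gold_antecedents.items()
    )
    return resolved / len(gold_antecedents)

# Hypothetical ids for illustration:
gold = {"m1": {"a1"}, "m2": {"a2"}}
resp_a = {"m1": "a1"}               # resolves m1 correctly, ignores m2
resp_b = {"m1": "ax", "m2": "ay"}   # resolves neither mention correctly
print(arcs_nom_recall_upper_bound(gold, [resp_a, resp_b]))  # 0.5
```

Restricting the count to gold mentions sidesteps the precision side (the false positives the paragraph calls cumbersome to disentangle), which is why only a recall upper bound is reported.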
| { |
| "text": "One way to establish the difference of two system outputs is to apply statistical significance tests. However, there is generally little agreement on which test to use, and it is often not trivial to verify if all criteria are met for the application of a specific test to a given data set (Yeh, 2000) . Furthermore, the significance tests provide no insight into the nature of the differences between two outputs. Several survey papers analysed performance of state-of-the-art tools for POS tagging (Volk and Schneider, 1998; Giesbrecht and Evert, 2009; Horsmann et al., 2015) or dependency parsing (McDonald and Nivre, 2007) . While these surveys provide performance results along different axes (accuracy, time, domain, frequent errors), they do not analyse the particular differences between the system responses on the token level and hence do not provide a (dis)similarity rating of the responses. Regarding dependency parsing, our work is most closely related to McDonald and Nivre (2007) and Seddah et al. (2013) . Both papers analyse the performance of parsers with respect to several subproblems. McDonald and Nivre (2007) also performed output combination experiments to stress that the two parsers that they investigated are complementary to a significant degree.", |
| "cite_spans": [ |
| { |
| "start": 290, |
| "end": 301, |
| "text": "(Yeh, 2000)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 500, |
| "end": 526, |
| "text": "(Volk and Schneider, 1998;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 527, |
| "end": 554, |
| "text": "Giesbrecht and Evert, 2009;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 555, |
| "end": 577, |
| "text": "Horsmann et al., 2015)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 600, |
| "end": 626, |
| "text": "(McDonald and Nivre, 2007)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 970, |
| "end": 995, |
| "text": "McDonald and Nivre (2007)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1000, |
| "end": 1020, |
| "text": "Seddah et al. (2013)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1107, |
| "end": 1132, |
| "text": "McDonald and Nivre (2007)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Comparative system evaluation in shared tasks is usually performed by pitting scores in evaluawill hallucinate different incorrect ones. tion metrics against each other, e.g. the CoNLL shared tasks on coreference (Pradhan et al., 2011; Pradhan et al., 2012) or on dependency parsing (Buchholz and Marsi, 2006; Nilsson et al., 2007) . While the post task evaluation of the CoNLL shared task 2007 included an experiment of system combination which showed performance improvements, it is generally left unclear how similar are the system outputs with (sometimes marginally) small differences with respect to the evaluation metrics.", |
| "cite_spans": [ |
| { |
| "start": 213, |
| "end": 235, |
| "text": "(Pradhan et al., 2011;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 236, |
| "end": 257, |
| "text": "Pradhan et al., 2012)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 283, |
| "end": 309, |
| "text": "(Buchholz and Marsi, 2006;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 310, |
| "end": 331, |
| "text": "Nilsson et al., 2007)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Another branch of evaluation related to our work is error analysis. G\u00e4rtner et al. (2014) presented a tool to explore coreference errors visually, but does not aggregate and classify them. Kummerfeld and Klein (2013) devised a set of error classes for coreference and analysed quantitatively which systems make which errors. Martschat and Strube (2014) presented an analysis and grouping of recall errors for coreference and evaluated a set of system responses. However, these analyses focus on the errors of one system at a time and then compare the overall error statistics, i.e. there is no direct linking or combination of the responses. Hence, we believe our approach to be complementary to the work outlined above.", |
| "cite_spans": [ |
| { |
| "start": 68, |
| "end": 89, |
| "text": "G\u00e4rtner et al. (2014)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 325, |
| "end": 352, |
| "text": "Martschat and Strube (2014)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We have presented a generic dissimilarity metric for system outputs and applied it to several systems for POS tagging, dependency parsing, and coreference resolution. We found that systems with marginal differences in accuracy scores or F1 actually have considerably distinct outputs. We combined system outputs and calculated upper bounds in performance as an additional measure of the degree of difference between the outputs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We discussed and applied a method for analysing the specific differences between two system outputs using a class inventory to label and quantify the differences. Our analysis revealed the (often considerable) quantity of new errors that improvements introduce compared to baselines. We believe that this kind of analysis is also useful during system and method design, as it allows one to track all changes in the output when adjusting a system or a feature set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "While we have explored our approach on three core tasks in Computational Linguistics, we believe it to be applicable to other areas in the field. Our hope is that our method of comparative evalu-ation will motivate other researchers to gain an indepth understanding of the output of their systems and what distinguishes them from others, beyond differences in accuracy or F1 scores. Table 12 : Largest accuracy differences between TreeTagger (TT) and Clevertagger(CT); number of token with POS tag in the test set (#tok)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 383, |
| "end": 391, |
| "text": "Table 12", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Since the ARCS framework is relatively unknown and not widely used, we revisit the connection of our dif f metric to accuracy and F1 outlined in section 2 in order to use one of the coreference metrics to establish the differences between the outputs. We saw that our metric is inversely equivalent to accuracy when taking one system response as the key and the other as the response. That is, we can calculate the dif f ratio by 1 \u2212", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A.2 Coreference resolution", |
| "sec_num": null |
| }, |
| { |
| "text": "|t i \u2208T :label(t i ,S 1 )==label(t i ,S 2 )| |T |", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A.2 Coreference resolution", |
| "sec_num": null |
| }, |
| { |
| "text": ", which is equivalent to taking S 1 as the key and S 2 as the response (or vice versa). For the coreference task, we can thus use one response as the key and the other as the response. The resulting F1 score can then be used as an agreement value, which, however, does not provide any detailed analysis of the nature of the differences compared to the ARCS approach. Table 13 shows the F1 scores when using one response as the key and the second as response. Note that switching the key and the response role provides the same F1 scores for two responses; the only effect is that the recall and precision values are switched.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 367, |
| "end": 375, |
| "text": "Table 13", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A.2 Coreference resolution", |
| "sec_num": null |
| }, |
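The diff ratio above amounts to one minus the per-token agreement of the two outputs; a minimal sketch with hypothetical tag sequences:

```python
def diff(labels1, labels2):
    """Pairwise diff ratio: 1 minus the fraction of tokens on which the two
    outputs agree, i.e. diff = 1 - |{t in T : label(t, S1) == label(t, S2)}| / |T|."""
    assert len(labels1) == len(labels2)
    agree = sum(a == b for a, b in zip(labels1, labels2))
    return 1 - agree / len(labels1)

# Hypothetical tag sequences for illustration:
s1 = ["NN", "VVFIN", "ART", "NE"]
s2 = ["NN", "VVINF", "ART", "NE"]
print(diff(s1, s2))  # 0.25: the outputs disagree on one of four tokens
```

Treating s1 as the key and s2 as the response, accuracy would be 0.75, so diff = 1 − accuracy, mirroring the diff = 100 − F1 conversion used for the coreference scores.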
| { |
| "text": "The table shows that using this approach, we obtain F1 scores that give quite high dissimilarities when turned into the dif f metric, i.e. dif f = 100 \u2212 F 1. The average of the dif f metric given MELA F1 is 28.90 (100 \u2212 71.10); given LEA F1 it is 35.22 (100\u221264.78 ", |
| "cite_spans": [ |
| { |
| "start": 247, |
| "end": 263, |
| "text": "35.22 (100\u221264.78", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A.2 Coreference resolution", |
| "sec_num": null |
| }, |
| { |
| "text": "Code available at: https://github.com/ dtuggener/ComparEval", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In other words, the complement of the overlap metric in tables 1 and 2.3 To motivate the nomenclature, we assume that S1 is e.g. a baseline upon which S2 tries to improve. However, the outputs can stem from any two systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We change the POS tag for pronominal adverbs from PROP to PROAV in the test set, since all taggers feature only the latter tag.5 The most frequent error by the Stanford tagger is labeling some punctuation tokens (e.g. '-') as '$[' instead of '$('. Considering it a minor error, we replace all '$[' labels in the Stanford response with '$(', increasing accuracy from 86.60 to 90.41%.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For tag description see http://www.ims. uni-stuttgart.de/forschung/ressourcen/ lexika/TagSets/stts-table.html 7 Note that the change comparison can also be sorted by the biggest accuracy difference, cf. appendix table A.1.8 We here use the TreeTagger as S1 and the Clevertagger as S2. Inverting the roles does not change the percentage values, but simply switches corrections to new errors and vice versa.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We convert the PennTreebank to Stanford dependencies using the Penn Treebank converter included in the Stanford parser (http://nlp.stanford.edu/ software/stanford-dependencies.shtml).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that the HOTCoref system has better recall than the Stanford system, but the Stanford system features better precision, which leads to a higher F1 score in table 7.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Globally Normalized Transition-Based Neural Networks", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Andor", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Alberti", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "Aliaksei", |
| "middle": [], |
| "last": "Severyn", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Presta", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuzman", |
| "middle": [], |
| "last": "Ganchev", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "2442--2452", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally Nor- malized Transition-Based Neural Networks. In Pro- ceedings of the 54th Annual Meeting of the Asso- ciation for Computational Linguistics (Volume 1: Long Papers), pages 2442-2452, Berlin, Germany. Google.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Learning Structured Perceptrons for Coreference Resolution with Latent Antecedents and Non-local Features", |
| "authors": [ |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "Bj\u00f6rkelund", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonas", |
| "middle": [], |
| "last": "Kuhn", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "47--57", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anders Bj\u00f6rkelund and Jonas Kuhn. 2014. Learning Structured Perceptrons for Coreference Resolution with Latent Antecedents and Non-local Features. In Proceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 47-57, Baltimore.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "CoNLL-X Shared Task on Multilingual Dependency Parsing", |
| "authors": [ |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Buchholz", |
| "suffix": "" |
| }, |
| { |
| "first": "Erwin", |
| "middle": [], |
| "last": "Marsi", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Tenth Conference on Computational Natural Language Learning, CoNLL-X '06", |
| "volume": "", |
| "issue": "", |
| "pages": "149--164", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X Shared Task on Multilingual Dependency Parsing. In Proceedings of the Tenth Conference on Com- putational Natural Language Learning, CoNLL-X '06, pages 149-164.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A Fast and Accurate Dependency Parser using Neural Networks", |
| "authors": [ |
| { |
| "first": "Danqi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "740--750", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Danqi Chen and Christopher Manning. 2014. A Fast and Accurate Dependency Parser using Neural Net- works. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750, Doha, Qatar.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Entity-Centric Coreference Resolution with Model Stacking", |
| "authors": [ |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "1405--1415", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kevin Clark and Christopher D Manning. 2015. Entity-Centric Coreference Resolution with Model Stacking. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 1405-1415, Beijing, China.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "The Stanford Typed Dependencies Representation", |
| "authors": [ |
| { |
| "first": "Marie-Catherine", |
| "middle": [], |
| "last": "De Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Coling 2008: Proceedings of the Workshop on Cross-Framework and Cross-Domain Parser Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marie-Catherine de Marneffe and Christopher D Man- ning. 2008. The Stanford Typed Dependencies Rep- resentation. In Coling 2008: Proceedings of the Workshop on Cross-Framework and Cross-Domain Parser Evaluation, pages 1-8.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Easy victories and uphill battles in coreference resolution", |
| "authors": [ |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Durrett", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1971--1982", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceed- ings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1971-1982, Seattle, Washington, USA.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Visualization, Search, and Error Analysis for Coreference Annotations", |
| "authors": [ |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "G\u00e4rtner", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "Bj\u00f6rkelund", |
| "suffix": "" |
| }, |
| { |
| "first": "Gregor", |
| "middle": [], |
| "last": "Thiele", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Seeker", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonas", |
| "middle": [], |
| "last": "Kuhn", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "7--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Markus G\u00e4rtner, Anders Bj\u00f6rkelund, Gregor Thiele, Wolfgang Seeker, and Jonas Kuhn. 2014. Visu- alization, Search, and Error Analysis for Corefer- ence Annotations. In Proceedings of 52nd Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 7-12, Balti- more, Maryland.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Is Partof-Speech Tagging a Solved Task? An Evaluation of POS Taggers for the German Web as Corpus", |
| "authors": [ |
| { |
| "first": "Eugenie", |
| "middle": [], |
| "last": "Giesbrecht", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Evert", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 5th Web as Corpus Workshop (WAC5)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugenie Giesbrecht and Stefan Evert. 2009. Is Part- of-Speech Tagging a Solved Task? An Evaluation of POS Taggers for the German Web as Corpus. In Proceedings of the 5th Web as Corpus Workshop (WAC5), San Sebastian, Spain.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Fast or Accurate? -A Comparative Evaluation of PoS Tagging Models", |
| "authors": [ |
| { |
| "first": "Tobias", |
| "middle": [], |
| "last": "Horsmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicolai", |
| "middle": [], |
| "last": "Erbs", |
| "suffix": "" |
| }, |
| { |
| "first": "Torsten", |
| "middle": [], |
| "last": "Zesch", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the International Conference of the German Society for Computational Linguistics and Language Technology (GSCL-2015)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tobias Horsmann, Nicolai Erbs, and Torsten Zesch. 2015. Fast or Accurate? -A Comparative Evalu- ation of PoS Tagging Models. In Proceedings of the International Conference of the German Society for Computational Linguistics and Language Technol- ogy (GSCL-2015), Essen, Germany.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Accurate unlexicalized parsing", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "423--430", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Klein and Christopher D. Manning. 2003. Ac- curate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computa- tional Linguistics, volume 1, pages 423-430.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Error-Driven Analysis of Challenges in Coreference Resolution", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Jk Jonathan", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Kummerfeld", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "265--277", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "JK Jonathan K Kummerfeld and Dan Klein. 2013. Error-Driven Analysis of Challenges in Coreference Resolution. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Pro- cessing, pages 265-277, Seattle.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Building a Large Annotated Corpus of English: The Penn Treebank", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [ |
| "Ann" |
| ], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Beatrice", |
| "middle": [], |
| "last": "Marcinkiewicz", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Santorini", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "313--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a Large Anno- tated Corpus of English: The Penn Treebank. Com- putational Linguistics, 19(2):313-330, jun.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Recall error analysis for coreference resolution", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Martschat", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Strube", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2070--2081", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sebastian Martschat and Michael Strube. 2014. Recall error analysis for coreference resolution. In Pro- ceedings of the 2014 Conference on Empirical Meth- ods in Natural Language Processing, pages 2070- 2081, Doha.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Characterizing the Errors of Data-Driven Dependency Parsing Models", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
| "volume": "", |
| "issue": "", |
| "pages": "122--131", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan McDonald and Joakim Nivre. 2007. Character- izing the Errors of Data-Driven Dependency Pars- ing Models. In Proceedings of the 2007 Joint Con- ference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 122-131.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Which Coreference Evaluation Metric Do You Trust? A Proposal for a Link-based Entity Aware Metric", |
| "authors": [ |
| { |
| "first": "Sadat", |
| "middle": [], |
| "last": "Nafise", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Moosavi", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Strube", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nafise Sadat Moosavi and Michael Strube. 2016. Which Coreference Evaluation Metric Do You Trust? A Proposal for a Link-based Entity Aware Metric. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguis- tics, volume 1.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "The CoNLL 2007 shared task on dependency parsing", |
| "authors": [ |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Deniz", |
| "middle": [], |
| "last": "Yuret", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the CoNLL shared task session of EMNLP-CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "915--932", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 shared task on dependency pars- ing. In Proceedings of the CoNLL shared task ses- sion of EMNLP-CoNLL, pages 915-932.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "CoNLL-2011 Shared Task: Modeling Unrestricted Coreference in OntoNotes", |
| "authors": [ |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Lance", |
| "middle": [], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitchell", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "1--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer Pradhan, Lance Ramshaw, Mitchell Marcus, Martha Palmer, Ralph Weischedel, and Nianwen Xue. 2011. CoNLL-2011 Shared Task: Modeling Unrestricted Coreference in OntoNotes. In Proceed- ings of the Fifteenth Conference on Computational Natural Language Learning, pages 1-27, Portland.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "CoNLL-2012 Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes", |
| "authors": [ |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Sameer Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Moschitti", |
| "suffix": "" |
| }, |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuchen", |
| "middle": [], |
| "last": "Uryupina", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Sixteenth Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes. In Proceedings of the Sixteenth Conference on Computational Natural Language Learning, Jeju.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Improvements In Part-of-Speech Tagging With an Application To German", |
| "authors": [ |
| { |
| "first": "Helmut", |
| "middle": [], |
| "last": "Schmid", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the ACL SIGDAT-Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "47--50", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Helmut Schmid. 1995. Improvements In Part-of-Speech Tagging With an Application To German. In Proceedings of the ACL SIGDAT-Workshop, pages 47-50.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Overview of the SPMRL 2013 Shared Task: A Cross-Framework Evaluation of Parsing Morphologically Rich Languages", |
| "authors": [ |
| { |
| "first": "Djame", |
| "middle": [], |
| "last": "Seddah", |
| "suffix": "" |
| }, |
| { |
| "first": "Reut", |
| "middle": [], |
| "last": "Tsarfaty", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "Kuebler", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie", |
| "middle": [], |
| "last": "Candito", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinho", |
| "middle": [], |
| "last": "Choi", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Farkas", |
| "suffix": "" |
| }, |
| { |
| "first": "Jennifer", |
| "middle": [], |
| "last": "Foster", |
| "suffix": "" |
| }, |
| { |
| "first": "Iakes", |
| "middle": [], |
| "last": "Goenaga", |
| "suffix": "" |
| }, |
| { |
| "first": "Koldo", |
| "middle": [], |
| "last": "Gojenola", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Spence", |
| "middle": [], |
| "last": "Green", |
| "suffix": "" |
| }, |
| { |
| "first": "Nizar", |
| "middle": [], |
| "last": "Habash", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Kuhlmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Maier", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Przepiorkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Seeker", |
| "suffix": "" |
| }, |
| { |
| "first": "Yannick", |
| "middle": [], |
| "last": "Versley", |
| "suffix": "" |
| }, |
| { |
| "first": "Veronika", |
| "middle": [], |
| "last": "Vincze", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcin", |
| "middle": [], |
| "last": "Woli\u0144ski", |
| "suffix": "" |
| }, |
| { |
| "first": "Alina", |
| "middle": [], |
| "last": "Wr\u00f3blewska", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": ["Villemonte"], |
| "last": "De La Clergerie", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Fourth Workshop on Statistical Parsing of Morphologically Rich Languages", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Djame Seddah, Reut Tsarfaty, Sandra Kuebler, Marie Candito, Jinho Choi, Richard Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepiorkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woli\u0144ski, Alina Wr\u00f3blewska, and Eric Villemonte de la Clergerie. 2013. Overview of the SPMRL 2013 Shared Task: A Cross-Framework Evaluation of Parsing Morphologically Rich Languages. In Fourth Workshop on Statistical Parsing of Morphologically Rich Languages.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Exploiting Synergies Between Open Resources for German Dependency Parsing, POS-tagging, and Morphological Analysis", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Volk", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerold", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "601--609", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Martin Volk, and Gerold Schneider. 2013. Exploiting Synergies Between Open Resources for German Dependency Parsing, POS-tagging, and Morphological Analysis. In Proceedings of the International Conference on Recent Advances in Natural Language Processing, pages 601-609, Hissar.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "The T\u00fcba-D/Z Treebank: Annotating German with a Context-Free Backbone", |
| "authors": [ |
| { |
| "first": "Heike", |
| "middle": [], |
| "last": "Telljohann", |
| "suffix": "" |
| }, |
| { |
| "first": "Erhard", |
| "middle": [], |
| "last": "Hinrichs", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "2229--2232", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Heike Telljohann, Erhard Hinrichs, and Sandra K\u00fcbler. 2004. The T\u00fcba-D/Z Treebank: Annotating German with a Context-Free Backbone. In Proceedings of the Fourth International Conference on Language Resources and Evaluation, pages 2229-2232, Lisbon.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Feature-rich Part-ofspeech Tagging with a Cyclic Dependency Network", |
| "authors": [ |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": ["D"], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", |
| "volume": "1", |
| "issue": "", |
| "pages": "173--180", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich Part-of-speech Tagging with a Cyclic Dependency Network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, pages 173-180.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Coreference Resolution Evaluation for Higher Level Applications", |
| "authors": [ |
| { |
| "first": "Don", |
| "middle": [], |
| "last": "Tuggener", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "231--235", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Don Tuggener. 2014. Coreference Resolution Evaluation for Higher Level Applications. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 231-235, Gothenburg.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Comparing a statistical and a rule-based tagger for German", |
| "authors": [ |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Volk", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerold", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of KONVENS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Martin Volk and Gerold Schneider. 1998. Comparing a statistical and a rule-based tagger for German. In Proceedings of KONVENS.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "More Accurate Tests for the Statistical Significance of Result Differences", |
| "authors": [ |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Yeh", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 18th Conference on Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "947--953", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexander Yeh. 2000. More Accurate Tests for the Statistical Significance of Result Differences. In Proceedings of the 18th Conference on Computational Linguistics - Volume 2, pages 947-953.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "\u2022 True Positive (TP): m is correctly resolved to an antecedent. \u2022 False Positive (FP): m has no antecedent in K but one in S. \u2022 False Negative (FN): m has no antecedent in S but one in K. \u2022 Wrong Linkage (WL): m has an antecedent in K but is assigned an incorrect antecedent in S." |
| }, |
| "TABREF0": { |
| "html": null, |
| "text": "Accuracy vs. Overlap on correct answers", |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF1": { |
| "html": null, |
| "text": "Accuracy vs. Overlap on all answers", |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "html": null, |
| "text": "Token-based label changes comparing TreeTagger \u2192 Clevertagger (and Key \u2192 TreeTagger \u2192 Clevertagger for changed errors) difficult for German, since capitalization cannot be exploited to distinguish the two. Furthermore, Clevertagger frequently invalidates TreeTagger's correct labeling of finite verbs, tagging them as nonfinite (VVFIN \u2192 VVINF), although this change occurs under the most frequent corrections as well. While these are commonly known error sources for POS tagging German, our approach shows that they are in fact the main source of the differences between the outputs of the two top-performing taggers.", |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td/><td/><td>LS</td><td>LAS</td><td>Sent</td></tr><tr><td>Stan. PCFG</td><td>87.96</td><td>92.26</td><td>85.36</td><td>24.17</td></tr><tr><td>Stan. NN</td><td>88.68</td><td>92.45</td><td>86.43</td><td>26.95</td></tr><tr><td>Parsey</td><td>92.70</td><td>92.86</td><td>88.94</td><td>28.89</td></tr><tr><td/><td>diff</td><td>\u2206 LAS</td></tr><tr><td>Stan. PCFG \u2194 Stan. NN</td><td>14.01</td><td>1.07</td></tr><tr><td>Stan. PCFG \u2194 Parsey</td><td>15.49</td><td>3.58</td></tr><tr><td>Stan. NN \u2194 Parsey</td><td>13.62</td><td>2.51</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "html": null, |
| "text": "The table shows that half (50.22%) of the 13.62% changed token annotations from Stanford NN to Parsey are corrections. All of these changes are attachment corrections, i.e. the labels of the tokens are not changed, which correlates with the small difference we saw in labeling score. The", |
| "content": "<table><tr><td colspan=\"2\">Difference: 13.62% (6776/49748)</td></tr><tr><td colspan=\"2\">Corrections: 50.22% (3403/6776)</td></tr><tr><td>nn \u2192 nn</td><td>10.93</td></tr><tr><td>prep \u2192 prep</td><td>9.49</td></tr><tr><td>cc \u2192 cc</td><td>5.32</td></tr><tr><td>conj \u2192 conj</td><td>4.17</td></tr><tr><td>advmod \u2192 advmod</td><td>2.59</td></tr><tr><td colspan=\"2\">New errors: 31.79% (2154/6776)</td></tr><tr><td>vmod \u2192 partmod</td><td>9.38</td></tr><tr><td>amod \u2192 nn</td><td>8.08</td></tr><tr><td>prep \u2192 prep</td><td>7.38</td></tr><tr><td>vmod \u2192 infmod</td><td>5.43</td></tr><tr><td>npadvmod \u2192 dep</td><td>4.32</td></tr><tr><td colspan=\"2\">Changed errors: 17.99% (1219/6776)</td></tr><tr><td>prep \u2192 prep \u2192 prep</td><td>5.00</td></tr><tr><td>vmod \u2192 vmod \u2192 partmod</td><td>2.95</td></tr><tr><td>advmod \u2192 advmod \u2192 advmod</td><td>1.97</td></tr><tr><td>cc \u2192 cc \u2192 cc</td><td>1.89</td></tr><tr><td>conj \u2192 conj \u2192 conj</td><td>1.23</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF7": { |
| "html": null, |
| "text": "", |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF9": { |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td>: Coreference resolution evaluation (F1)</td></tr><tr><td>and differences (%)</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF10": { |
| "html": null, |
| "text": "HOTCoref and Stanford. Results are given in table 8.", |
| "content": "<table><tr><td colspan=\"2\">Difference: 37.45% (5760 / 15382)</td></tr><tr><td colspan=\"2\">Corrections: 44.62% (2570/5760)</td></tr><tr><td>wl \u2192 tp</td><td>20.09</td></tr><tr><td>fn \u2192 tp</td><td>12.34</td></tr><tr><td>fp \u2192 tn</td><td>12.19</td></tr><tr><td colspan=\"2\">New errors: 41.65% (2399/5760)</td></tr><tr><td>tp \u2192 wl</td><td>17.08</td></tr><tr><td>tp \u2192 fn</td><td>12.33</td></tr><tr><td>tn \u2192 fp</td><td>12.24</td></tr><tr><td colspan=\"2\">Changed errors: 13.73% (791/5760)</td></tr><tr><td>fn \u2192 wl</td><td>3.87</td></tr><tr><td>wl \u2192 fn</td><td>9.86</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF11": { |
| "html": null, |
| "text": "", |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF14": { |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td>: Parsing accuracy (LAS) and upper</td></tr><tr><td>bounds.</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF15": { |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td/><td>Berk.</td><td>Stan.</td><td>HOT.</td><td>Upper bound</td><td/></tr><tr><td>Overall</td><td>55.72</td><td>58.13</td><td>59.34</td><td>73.39</td><td>+14.05</td></tr><tr><td>Nouns</td><td>59.67</td><td>59.76</td><td>60.99</td><td>73.48</td><td>+12.49</td></tr><tr><td>PRP</td><td>50.66</td><td>56.30</td><td>57.56</td><td>73.80</td><td>+16.24</td></tr><tr><td>PRP$</td><td>62.36</td><td>64.62</td><td>65.98</td><td>80.57</td><td>+14.59</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF16": { |
| "html": null, |
| "text": "Mention-based coreference performance (ARCS nom recall) and upper bounds", |
| "content": "<table><tr><td>However, there is a considerable difference in recall to the mention types for all systems: Possessive pronouns are more easily attached to correct nominal antecedents than nouns and personal pronouns. Furthermore, we see that upper bounds raise recall uniformly for all mention types by a considerable margin. This suggests that the outputs are indeed different in several regards, which correlates to the comparisons in tables 7 and 13.</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF18": { |
| "html": null, |
| "text": "). Compared to the ARCS any", |
| "content": "<table><tr><td>Key</td><td colspan=\"3\">Response LEA F1 MELA F1</td></tr><tr><td colspan=\"2\">Berkeley HOTCoref</td><td>63.58</td><td>70.32</td></tr><tr><td colspan=\"2\">Berkeley Stanford</td><td>66.03</td><td>71.91</td></tr><tr><td colspan=\"2\">Stanford HOTCoref</td><td>64.73</td><td>71.08</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF19": { |
| "html": null, |
| "text": "Coreference system comparison pairing responses average diff (25.56) and the ARCS nom average diff, 34.09, the values are in a similar range.", |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |