{
"paper_id": "I13-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:14:41.629647Z"
},
"title": "Multilingual Mention Detection for Coreference Resolution",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Trento",
"location": {
"country": "Italy"
}
},
"email": "uryupina@gmail.com"
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "QCRI",
"location": {
"addrLine": "Qatar Foundation",
"settlement": "Doha",
"country": "Qatar"
}
},
"email": "amoschitti@qf.org.qa"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper proposes a novel algorithm for multilingual mention detection: we extract mentions from parse trees via kernelbased SVM learning. Our approach allows for straightforward mention detection for any language where (not necessary perfect) parsing resources are available, without any complex language-specific rule engineering. We also investigate possibilities for incorporating automatically acquired mentions into an end-to-end coreference resolution system. We evaluate our approach on the Arabic and Chinese portions of the CoNLL-2012 dataset, showing a significant improvement over the system with the baseline mention detection.",
"pdf_parse": {
"paper_id": "I13-1012",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper proposes a novel algorithm for multilingual mention detection: we extract mentions from parse trees via kernelbased SVM learning. Our approach allows for straightforward mention detection for any language where (not necessary perfect) parsing resources are available, without any complex language-specific rule engineering. We also investigate possibilities for incorporating automatically acquired mentions into an end-to-end coreference resolution system. We evaluate our approach on the Arabic and Chinese portions of the CoNLL-2012 dataset, showing a significant improvement over the system with the baseline mention detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Accurate mention detection (MD) is a vital prerequisite for a variety of Natural Language Processing tasks, in particular, for Relation Extraction (RE) and Coreference Resolution (CR). If a toolkit cannot extract mentions reliably, it will obviously be unable to assign them to relations or entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many studies on RE and CR report evaluation figures on gold mentions: in such a setting, a system is supplied with correct mention boundaries and/or semantic classes or other relevant properties. It can, in theory, be argued that such a methodology provides better insights on performance of RE and CR algorithms per se. It has been demonstrated, however, that evaluation results on gold mentions are misleading: for example, Ng (2008) shows that unsupervised CR algorithms exhibit promising results on gold mentions, that are not mirrored in a more realistic evaluation on automatically detected mentions.",
"cite_spans": [
{
"start": 426,
"end": 435,
"text": "Ng (2008)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The exact scope of the mention detection task varies considerably depending on the annotation guidelines. Thus, some corpora consider all the (non-embedding) NPs to be mentions, some corpora do not allow for non-referential mentions and some do not mark singleton referential mentions, that do not participate in coreference relations. In addition, some guidelines may restrict the annotation to specific semantic types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A number of linguistic studies focus on various syntactic, semantic and discourse clues that might help identify nominal constructions that cannot participate in coreference relations. Possible features include, among others, specific syntactic constructions for expletive pronouns, negation, modality and quantification (Karttunen, 1976) . Several algorithms have been proposed recently, trying to tackle some of the addressed phenomena within a computational approach. Thus, a number of algorithms have been developed recently to identify expletive usages of \"it\" (Evans, 2001; Boyd et al., 2005; Bergsma and Yarowsky, 2011) . While these approaches are potentially beneficial for mention detection in English, for other languages, neither theoretical nor computational studies are available at the moment. In this paper, we use tree kernels to extract relevant syntactic patterns automatically, without assuming any prior knowledge of the input language.",
"cite_spans": [
{
"start": 321,
"end": 338,
"text": "(Karttunen, 1976)",
"ref_id": "BIBREF12"
},
{
"start": 566,
"end": 579,
"text": "(Evans, 2001;",
"ref_id": "BIBREF6"
},
{
"start": 580,
"end": 598,
"text": "Boyd et al., 2005;",
"ref_id": "BIBREF1"
},
{
"start": 599,
"end": 626,
"text": "Bergsma and Yarowsky, 2011)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a learning-based solution to the mention detection task. We use SVMs (Joachims, 1999) with syntactic tree kernels (Collins and Duffy, 2001; Moschitti, 2008; Moschitti, 2006) to classify parse tree nodes as \u00b1mentions. Our approach does not require any language-or corpus-specific engineering and thus can be easily adapted to cover new languages or mention annotation schemes.",
"cite_spans": [
{
"start": 95,
"end": 111,
"text": "(Joachims, 1999)",
"ref_id": "BIBREF11"
},
{
"start": 140,
"end": 165,
"text": "(Collins and Duffy, 2001;",
"ref_id": "BIBREF2"
},
{
"start": 166,
"end": 182,
"text": "Moschitti, 2008;",
"ref_id": "BIBREF15"
},
{
"start": 183,
"end": 199,
"text": "Moschitti, 2006)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of paper is organized as follows. In the next section, we define the task and discuss our tree and vector representations. Section 4 presents MD evaluation figures. Finally, in Section 5 we incorporate our MD module into an end-to-end coreference resolution system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Until recently, most RE and CR toolkits have been evaluated on the ACE datasets (Doddington et al., 2004) . The ACE guidelines restrict possible mentions to be considered to specific semantic types (PERSON, LOCATION and so on). Moreover, mentions are annotated with their minimal and maximal span, allowing for relaxed matching between gold and automatically extracted boundaries. In such a setting, the mention detection task can be cast as a tagging problem, similar to the named entity recognition and classification task. A number of systems have followed this scenario, demonstrating reliable performance (Florian et al., 2004; Ittycheriah et al., 2003; Zitouni and Florian, 2008) .",
"cite_spans": [
{
"start": 80,
"end": 105,
"text": "(Doddington et al., 2004)",
"ref_id": "BIBREF5"
},
{
"start": 610,
"end": 632,
"text": "(Florian et al., 2004;",
"ref_id": "BIBREF7"
},
{
"start": 633,
"end": 658,
"text": "Ittycheriah et al., 2003;",
"ref_id": "BIBREF10"
},
{
"start": 659,
"end": 685,
"text": "Zitouni and Florian, 2008)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the past years, however, several corpora have been created from a more linguistic perspective: for example, the OntoNotes dataset (Hovy et al., 2006; Pradhan et al., 2012) provides annotation for unrestricted coreference. The guidelines differ significantly from the ACE scheme: mentions correspond to parse nodes and can be of any semantic type, the systems are expected to recover mention boundaries exactly. The OntoNotes mentionsunlike ACE ones-correspond to large NP structures (embedding NP nodes in gold parse trees), so a traditional approach (e.g., one of those mentioned above), which aims at identifying basic NP chunks, would not be applicable here. Therefore, any MD method for OntoNotes would rely on parsing.",
"cite_spans": [
{
"start": 133,
"end": 152,
"text": "(Hovy et al., 2006;",
"ref_id": "BIBREF8"
},
{
"start": 153,
"end": 174,
"text": "Pradhan et al., 2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The OntoNotes corpus has been used for evaluating end-to-end CR systems at two CoNLL shared tasks (2011 and 2012). At the 2011 shared task, the participants relied on rule-based modules for extracting mention boundaries from parse trees. This was relatively straightforward, as the task was devoted to CR in English and most participants could use their in-house MD modules developed and refined in the past decade. At the 2012 shared task, however, the systems were expected to provide end-to-end coreference resolution for Arabic and Chinese. As it turned out, most groups could not adapt their MD rules to cover these two languages and fell back to very simple baselines (e.g., \"use all NP nodes as mentions\"). Kummerfeld et al. (2011) investigated various post-and pre-filtering heuristics for adapting their mention detection algorithm to the OntoNotes English data in a semi-automatic way, reporting mixed results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We recast MD as a node filtering task: each candidate node is classified as either mention or not. In this study, we consider all \"NP\" nodes to be candidates for MD. As Table 1 shows, this is a reasonable assumption for the OntoNotes dataset, as almost 90% of all the mentions for both Arabic and Chinese correspond to NP nodes. The remaining 11-14% of mentions can mostly be attributed to parsing errors: as we aim at end-to-end processing with no gold information available, we run our system on automatically extracted parse trees, it is therefore possible that a mention corresponds to a gold NP node that has not been labeled correctly in an automatic parse tree. Not all the NP nodes, however, correspond to a mention. Such non-mention NPs fall into several categories:",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 176,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mention extraction from parse trees",
"sec_num": "3"
},
{
"text": "\u2022 Embedded NPs. When an NP is embedded into another one, only the outer NP is used to represent a mention:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention extraction from parse trees",
"sec_num": "3"
},
{
"text": "(1) [MENTION-NP [NP This type] of earthquake] has no precursors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention extraction from parse trees",
"sec_num": "3"
},
{
"text": "A number of heuristics have been proposed for English to identify and discard embedded NPs, based on available head-finding algorithms, e.g., (Collins, 1999) . For other languages, however, the task of finding a head of a given NP in a constituency tree is not trivial.",
"cite_spans": [
{
"start": 142,
"end": 157,
"text": "(Collins, 1999)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mention extraction from parse trees",
"sec_num": "3"
},
{
"text": "\u2022 Non-referential NPs. Depending on the annotation guidelines, non-referential NPs can either be marked as mentions or not. In OntoNotes, non-referential NPs should not be annotated:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention extraction from parse trees",
"sec_num": "3"
},
{
"text": "(2) This type of earthquake has [ N P no precursors].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention extraction from parse trees",
"sec_num": "3"
},
{
"text": "\u2022 Singleton NPs. In some CR corpora (for example, ACE), mentions are annotated even if they do not participate in any coreference relations. In other corpora (MUC and OntoNotes), such singletons are not marked. When singletons are not marked, the MD tasks becomes considerably more difficult: the performance of an MD component cannot be measured and optimized directly, but only in conjunction with a coreference resolver.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention extraction from parse trees",
"sec_num": "3"
},
{
"text": "\u2022 Erroneous NPs. When we evaluate an endto-end system, we expect it to process raw input and thus rely on automatically extracted parse trees. Some NP-nodes might be incorrect, not corresponding to any NP in the gold tree. Such nodes cannot be mentions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention extraction from parse trees",
"sec_num": "3"
},
{
"text": "(3) At the meeting, Huang Xiangning read [ N P the earthquake prediction] that they had previously issued.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention extraction from parse trees",
"sec_num": "3"
},
{
"text": "\"The earthquake prediction\" is considered to be an NP node by the parser. In the gold data, however, this node does not exist at all. And even if it existed, the mention should correspond to its embedding NP node, \"the earthquake prediction that the had previously issued\" (cf. example 1 above). While this problem is less crucial for English, parsing resources for other languages are still scarce and less reliable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention extraction from parse trees",
"sec_num": "3"
},
{
"text": "We use kernel-based SVMs to classify nodes as \u00b1mentions. This requires representing a relevant fragment of a tree with a specific node marked as \"C-NP\" (candidate). We start from a straightforward representation: using automatically generated parse trees provided within the CoNLL data distribution, we generate one example for each NP node: the example corresponds to the entire parse tree with just a single node re-labeled as \"C-NP\". The assigned class label reflects the fact that this particular node corresponds to some gold mention or not. For example, the full parse tree for our sentence (1-2) will generate one positive (for \"This type of earthquake\", shown on Figure 1 ) and three negative examples (for \"This type\", \"earthquake\" and \"no precursors\"). While this representation might work for our toy example, for a longer sentence it would provide irrelevant information. Consider again the tree on Figure 1 . To generate a training example we append \"C-\" to one NP node, keeping all the remaining nodes as-is. The tree kernel operates on subtrees of the given structure, so, effectively, it will consider a lot of tree fragments that do not contain the marked node. These fragments will affect the treatment of different examples, possibly with conflicting class labels. It will not only make learning slow but also introduce spurious evidence, decreasing the system's performance. We have therefore investigated two possibilities for pruning our trees.",
"cite_spans": [],
"ref_spans": [
{
"start": 671,
"end": 679,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 911,
"end": 919,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Tree Representation",
"sec_num": "3.1"
},
{
"text": "Our first pruning algorithm (\"up-down\") starts from the node of interest (C-NP) and goes up for u nodes. From each node on the path, it considers all its children up to the depth d. The first part of Figure 2 shows a pruned tree for u = 2, d = 1 for the node \"This type of earthquake.\"",
"cite_spans": [],
"ref_spans": [
{
"start": 200,
"end": 208,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tree Representation",
"sec_num": "3.1"
},
{
"text": "Our second pruning algorithm (\"radial\") starts from the node of interest and considers all the nodes in the tree that are reachable from it via at most n edges. The second part of Figure 2 shows a pruned tree for n = 2 for the same node.",
"cite_spans": [],
"ref_spans": [
{
"start": 180,
"end": 188,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tree Representation",
"sec_num": "3.1"
},
{
"text": "In addition to (pruned) trees, we also provide vector representations of our NPs. For each NP, we extract its basic properties: number, gender, person, mention type (name, nominal or pronoun) and the number of other NPs in the document that have the same surface form. To extract mention properties, we have to compute the head. However, the goal of our study is to provide an MD algorithm that is adaptable to different languages without extensive engineering. We have therefore deliberately relied on an over-simplistic heuristic for finding an NP head: either the last or the first noun in an NP is considered a head, depending on some very basic information on a word order in a specific language. Given the head, we extract its properties from the CoNLL data in a straightforward way (for example, we have compiled a list of pronouns with their gender, number and person values from the training data and so on). This is done fully automatically and doesn't require any ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Representation",
"sec_num": "3.2"
},
{
"text": "In this section we provide evaluation results on both Arabic and Chinese. We reserve a small portion of the CoNLL training data (around 20k instances for each language) for training an MD system. Another small subset (around 5k instances) is reserved for fitting the system parameters. The evaluation results are reported on the CoNLL development data. Note that we evaluate the NP-node classifier, so the system receives no penalty for missing mentions that are not NPs. In Section 5 below, however, we will assess the impact of MD on the end-to-end CR system and thus penalize for missing non-NP mentions. As a baseline (\"all-NP\"), we consider all the NP nodes to be mentions. Table 3 below compares this baseline against mentions extracted automatically from different representations. We use Syntactic Tree Kernels (TK) implemented within the SVM-TK toolkit 2 to induce the classification.",
"cite_spans": [],
"ref_spans": [
{
"start": 679,
"end": 686,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Evaluating MD",
"sec_num": "4"
},
{
"text": "As our results suggest, vector representation does not provide enough information for robust mention detection. 3 Indeed, without tree kernels, the system is only able to learn a major class labeling. This highlights the importance of a model that is able to handle structured input, learning relevant patterns directly from parse trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating MD",
"sec_num": "4"
},
{
"text": "As discussed in Section 3 above, full trees contain too much misleading evidence. A single parse tree might contain several dozens of NP nodes, so, representation pruning Both pruning strategies have resulted in a substantial improvement in the performance level. The radial pruning has significantly outperformed the up-down strategy. Moreover, the radial pruning depends on just one parameter and can therefore be optimized faster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating MD",
"sec_num": "4"
},
{
"text": "Finally, joint vector and tree representation further outperforms a plain tree-based model. It must be noted, however, that our MD features (Table 2) require at least some minimal amount of languagespecific engineering.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 149,
"text": "(Table 2)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluating MD",
"sec_num": "4"
},
{
"text": "Detection into an end-to-end coreference resolution system",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating TK-based Mention",
"sec_num": "5"
},
{
"text": "For our experiments, we use BART -a modular toolkit for coreference resolution that supports state-of-the-art statistical approaches to the task and enables efficient feature engineering (Versley et al., 2008) . BART has originally been created and tested for English, but its flexible modular architecture ensures its portability to other languages and domains. In our evaluation experiments, we follow a very simple model of coreference, namely, the mention-pair approach advocated by Soon et al. (2001) and adopted in many studies ever since. We believe, however, that more complex models of coreference will also benefit from our MD algorithm: most state-of-the-art CR systems treat mention detection as a preprocessing step that is not affected by further processing and therefore we expect them to yield better performance when such a preprocessing is achieved in a more robust way.",
"cite_spans": [
{
"start": 187,
"end": 209,
"text": "(Versley et al., 2008)",
"ref_id": "BIBREF19"
},
{
"start": 487,
"end": 505,
"text": "Soon et al. (2001)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating TK-based Mention",
"sec_num": "5"
},
{
"text": "Creating a robust coreference resolver for a new language requires linguistic expertise and language-specific engineering. This cannot and, moreover, should not be avoided by fully language-agnostic methods. Our approach to endto-end coreference resolution relies on a universal MD component that requires no linguistic engineering -it facilitates the development of coreference resolvers in the narrow sense, by providing them with input mentions. We must stress that the resolvers themselves are not supposed to be universal: in fact, a number of linguistic studies on coreference address various language-specific challenging problems (e.g., zero pronouns, different marking of information status etc).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating TK-based Mention",
"sec_num": "5"
},
{
"text": "Below we describe the adjustments we made to BART to cover Arabic and Chinese and then report on our experiments for integrating kernelbased MD into BART to provide an end-to-end coreference resolution for these languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating TK-based Mention",
"sec_num": "5"
},
{
"text": "The modularity of the BART toolkit enables its straightforward adaptation to different languages. This includes creating meaningful linguistic representations of mentions (\"mention properties\") and, optionally, some experiments on feature selection and engineering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting BART to Arabic and Chinese",
"sec_num": "5.1"
},
{
"text": "We extracted some properties (sentence boundaries, lemmata, speaker id) for Arabic and Chinese directly from the CoNLL/OntoNotes layers 4 . Mention types are inferred from PoS tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting BART to Arabic and Chinese",
"sec_num": "5.1"
},
{
"text": "We compiled lists of pronouns for both Arabic and Chinese from the training and development data. For Arabic, we used gold PoS tags to classify pronouns into subtypes, person, number and gender. For Chinese, no such information is available, so we consulted several grammar sketches and lists of pronouns on the web. Finally, we extracted a list of gender affixes for Arabic along with a list of gender-classified lemmata from the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting BART to Arabic and Chinese",
"sec_num": "5.1"
},
{
"text": "We assessed the list of features, supported by BART, discarding those that require unavailable information (for example, the aliasing feature relies on semantic types for named entities that are not available within the CoNLL/OntoNotes distribution for languages other than English). We also created two additional features: LemmataMatch (similar to string match, but uses lemmata instead of tokens) and NumberAgreementDual (similar to commonly used number agreement features, but supports dual number). Both features are expected to provide important information for coreference in Arabic, a morphologically rich language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting BART to Arabic and Chinese",
"sec_num": "5.1"
},
{
"text": "We ran a feature selection experiment to further remove irrelevant features (BART were only tested on European languages, thus several features reflected patterns more common for Germanic and Romance languages). This resulted in two feature sets, one for each language, listed in Table 4 . For comparison, we also show the baseline features (cf. below).",
"cite_spans": [],
"ref_spans": [
{
"start": 280,
"end": 287,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Adapting BART to Arabic and Chinese",
"sec_num": "5.1"
},
{
"text": "Coreference resolution systems have different tolerance for precision and recall MD errors. If a spurious mention is introduced, the CR system might still assign it to no coreference chain and thus discard from the output partition. If a correct mention is missed, however, the system has no chance of recovering it as it does not even start processing such a mention. This suggests that an MD module should be tuned to yield better recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Kernel-based MD into a Coreference Resolver",
"sec_num": "5.2"
},
{
"text": "To assess the impact of MD precision and recall errors on the performance of our coreference resolver, we run a simulation experiment. We start from the upper bound baseline: the MD module considers all the true (gold) NP mentions to be positive and all the spurious ones -to be negative. We then randomly distort this baseline, adding spurious mentions and removing correct ones, to arrive at a predefined performance level. The resulting MD output is then sent to our coreference resolution system and its performance is measured. As a measure of the CR system performance, we use the MELA F-score -an average of MUC, B 3 and CEAF e metrics, the official performance measure at the CoNLL shared task (Pradhan et al., 2012) . Figure 3 shows the results of our simulation experiment on the development data. Each line on the figure corresponds to a single MD recall level (varying from 100% to 70%). On the horizontal axis, we plot the MD precision (from 10% to 100%) and on the vertical axis -the end-to-end system MELA F-score. The curves support our intuition that reliable MD recall is crucial for coreference: when the MD recall drops to around 70%, the MELA score remains at the baseline level even for very high MD precision. It must be noted that our simulation experiment relies on an unrealistic assumption: we assume all the errors to be independent. In a more practical setting, the MELA F-score for a given combination of MD precision and recall can be higher, because the coreference system might fail to resolve the same NPs that are problematic for the MD module. Nevertheless, the curves illustrate the fact that any MD module should be strongly biased towards recall in order to be useful for coreference resolution.",
"cite_spans": [
{
"start": 702,
"end": 724,
"text": "(Pradhan et al., 2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 727,
"end": 735,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Incorporating Kernel-based MD into a Coreference Resolver",
"sec_num": "5.2"
},
{
"text": "We therefore reran our optimization experiments to fit more parameters of the MD module. Recall from Section 4 that we already used a small amount of CoNLL training data to fit our d, u and n values. We expanded the set of parameters, using the end-to-end performance (MELA F-score) to select optimal values on the same subset. Table 5 lists all the parameters of our MD module. d, u up-down pruning thresholds n radial pruning threshold j precision-recall trade-off (SVM-TK) c cost factor (SVM-TK) s size of MD vs. CR data splits r tree vs. tree+vector representation Table 5 : Parameters optimized on a held-out data Our experiments reveal that, indeed, a recalloriented version of our MD classifier yields the most reliable end-to-end resolution. Table 6 shows the MD performance of the best classifier selected according to the MELA score. While the F-scores of these biased classifiers are, obviously, much lower than their unbiased counterparts, they still manage to filter out a substantial amount of noun phrases, at the same time maintaining a very high recall level.",
"cite_spans": [],
"ref_spans": [
{
"start": 328,
"end": 336,
"text": "Table 5",
"ref_id": null
},
{
"start": 570,
"end": 577,
"text": "Table 5",
"ref_id": null
},
{
"start": 751,
"end": 758,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Incorporating Kernel-based MD into a Coreference Resolver",
"sec_num": "5.2"
},
{
"text": "Finally, Tables 7 and 8 ",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 23,
"text": "Tables 7 and 8",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Incorporating Kernel-based MD into a Coreference Resolver",
"sec_num": "5.2"
},
{
"text": "{M i , M j }, i < j,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Kernel-based MD into a Coreference Resolver",
"sec_num": "5.2"
},
{
"text": "where M i is a candidate antecedent and M j is a candidate anaphor kernel-based MD (TKMD), we compare its performance against two baselines. The lower bound, \"all-NP\", considers all the NP-nodes in a parse tree to be candidate mentions. The upper bound, \"gold-NP\" only considers gold NP-nodes to be candidate mentions. Note that the upper bound does not include mentions that do not correspond to NP-nodes at all (around 12% of all the mentions in the development data, cf. Table 1 above) . Tables 7 and 8 also show the performance level of BART's rule-based MD module that was developed for English. Although this heuristic has proved reliable on the English data, for example, at the CoNLL 2011 and 2012 shared tasks, it is not robust enough to be ported as-is to other languages: indeed, the performance of the heuristic MD on Arabic and Chinese is lower than the all-NP baseline. This highlights the importance of a learning-based approach: while rule-based MD shows good results for English, we cannot expect spending ten more years on designing similar systems for other languages. : Evaluating the impact of MD and linguistic knowledge: MELA F-score on the development set, significant improvement over the corresponding all-NP baseline shown with \u2020 .",
"cite_spans": [],
"ref_spans": [
{
"start": 474,
"end": 488,
"text": "Table 1 above)",
"ref_id": null
},
{
"start": 491,
"end": 505,
"text": "Tables 7 and 8",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Incorporating Kernel-based MD into a Coreference Resolver",
"sec_num": "5.2"
},
{
"text": "drastically when one shifts from a realistic evaluation (the \"all-NP\" baseline) to gold NP mentions. Kernel-based MD is able to recover part of this difference, providing significant improvements over the baseline (t-test on individual documents, p < 0.05).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Kernel-based MD into a Coreference Resolver",
"sec_num": "5.2"
},
{
"text": "Another important point is the difference between our basic feature set and more specific features (cf, Table 4 ). The contribution of extra features is relatively small and not significant, which is not surprising given the fact that all of them are very na\u00efve and do not address any coreference- related phenomena specific for Arabic and Chinese. However, the extra features help more when the MD improves. This suggests that a robust MD module is an essential prerequisite for further work on coreference in new languages: a more accurate set of mentions provides a better testbed for manually engineered language-specific features or constraints.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 111,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Incorporating Kernel-based MD into a Coreference Resolver",
"sec_num": "5.2"
},
{
"text": "In this paper we have investigated possibilities for language-independent mention detection based on syntactic tree kernels. We have shown that a kernel-based approach can provide a robust preprocessing system that is a vital prerequisite for fast and efficient development of end-to-end multilingual coreference resolvers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "We have evaluated different tree and vector representations, showing that the best performance is Soon et al. (2001) achieved by applying radial pruning to parse trees and augmenting the resulting representation with feature vectors, encoding very basic and shallow properties of candidate NPs.",
"cite_spans": [
{
"start": 98,
"end": 116,
"text": "Soon et al. (2001)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
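{
"text": "The shallow feature vectors mentioned above (Gender, Definiteness, Number, MentionType, plus a normalized same-surface-NP count) can be binarized along the following lines. The exact encoding is not spelled out in the paper (which reports 10 binary or continuous features), so the 13-dimensional one-hot layout below only illustrates the mechanism.\n\n```python\n# Illustrative one-hot binarization of the nominal mention features used to\n# augment the tree representation. The grouping that yields the paper's 10\n# features is not given, so this layout is an assumption, not their encoding.\nFEATURES = [\n    (\"Gender\", [\"F\", \"M\", \"Unknown\"]),\n    (\"Definiteness\", [\"Yes\", \"No\"]),\n    (\"Number\", [\"Sg\", \"Pl\", \"Du\", \"Unknown\"]),\n    (\"MentionType\", [\"Name\", \"Nominal\", \"Pronoun\"]),\n]\n\ndef binarize(mention, same_surface_ratio):\n    \"\"\"Map a mention's nominal attributes to a flat feature vector.\"\"\"\n    vec = []\n    for name, values in FEATURES:\n        vec.extend(1.0 if mention[name] == value else 0.0 for value in values)\n    vec.append(same_surface_ratio)  # continuous, already normalized to [0, 1]\n    return vec\n\nv = binarize({\"Gender\": \"Unknown\", \"Definiteness\": \"Yes\",\n              \"Number\": \"Sg\", \"MentionType\": \"Nominal\"}, 0.25)\nprint(len(v))  # 13 in this illustrative layout\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},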
{
"text": "We have investigated possibilities of incorporating our MD module to an end-to-end coreference resolution system. Our evaluation results show significant improvement over the system relying on the \"all-NP\" baseline for both Arabic and Chinese. It should be stressed that no other baseline is available without using deep linguistic expertise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "In the future, we plan to follow two directions to further improve our algorithm. First, we want to consider more global models of MD, providing joint inference over sets of NP nodes, and, possibly, incorporating CR predictions as well. Several studies (Daume III and Marcu, 2005; Denis and Baldridge, 2009) followed this direction recently, showing promising results for joint MD and CR modeling.",
"cite_spans": [
{
"start": 253,
"end": 280,
"text": "(Daume III and Marcu, 2005;",
"ref_id": "BIBREF9"
},
{
"start": 281,
"end": 307,
"text": "Denis and Baldridge, 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Second, we want to combine our learningbased MD with more traditional heuristic systems. While our approach provides a fast reliable testbed and allows CR researchers to specifically focus on coreference, rule-based MD modules have been created for a variety of languages, especially for European ones, in the past decade. We believe that by combining such systems with our kernel-based algorithm, we can build MD modules that show a high performance level and, at the same time, are more robust and portable to different domains and corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "We use English OntoNotes examples throughout this paper to illustrate discussed phenomena, as our approach is language-independent. The evaluation, however, is done on Arabic and Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://disi.unitn.it/moschitti/ Tree-Kernel.htm3 As a pilot experiment, we also added bag-of-words features to our vector representations, but this didn't yield any improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Recall that all the layers, apart from the Arabic lemma, were computed using state-of-the-art preprocessing tools by the CoNLL organizers and do not contain gold information",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research described in this paper has been partially supported by the European Community's Seventh Framework Programme (FP7/2007(FP7/ -2013 under the grant #288024: LIMOSINE -Linguistically Motivated Semantic aggregation engiNes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "NADA: A robust system for non-referential pronoun detection",
"authors": [
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. DAARC",
"volume": "",
"issue": "",
"pages": "12--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shane Bergsma and David Yarowsky. 2011. NADA: A robust system for non-referential pronoun detec- tion. In Proc. DAARC, pages 12-23, Faro, Portugal, October.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Identifying nonreferential it: A machine learning approach incorporating linguistically motivated patterns",
"authors": [
{
"first": "Adriane",
"middle": [],
"last": "Boyd",
"suffix": ""
},
{
"first": "Whitney",
"middle": [],
"last": "Gegg-Harrison",
"suffix": ""
},
{
"first": "Donna",
"middle": [],
"last": "Byron",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedingd of the ACL Workshop on Feature Engineering for Machine Learning in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "40--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adriane Boyd, Whitney Gegg-Harrison, and Donna Byron. 2005. Identifying nonreferential it: A machine learning approach incorporating linguisti- cally motivated patterns. In Proceedingd of the ACL Workshop on Feature Engineering for Machine Learning in Natural Language Processing,, pages 40-47.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Convolution kernels for natural language",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Duffy",
"suffix": ""
}
],
"year": 2001,
"venue": "Advances in Neural Information Processing Systems",
"volume": "14",
"issue": "",
"pages": "625--632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Nigel Duffy. 2001. Convolution kernels for natural language. In Advances in Neural Information Processing Systems 14, pages 625-632. MIT Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Head-Driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1999. Head-Driven Statistical Mod- els for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Global joint models for coreference resolution and named entity classification",
"authors": [
{
"first": "Pascal",
"middle": [],
"last": "Denis",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2009,
"venue": "Procesamiento del Lenguaje Natural 42",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascal Denis and Jason Baldridge. 2009. Global joint models for coreference resolution and named entity classification. In Procesamiento del Lenguaje Natu- ral 42, Barcelona: SEPLN.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The automatic content extraction (ACE) program-tasks, data, and evaluation",
"authors": [
{
"first": "George",
"middle": [],
"last": "Doddington",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Przybocki",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassell",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassell, and Ralph Weischedel. 2004. The automatic content extrac- tion (ACE) program-tasks, data, and evaluation. In Proceedings of the Language Resources and Evalu- ation Conference.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Applying machine learning toward an automatic classification of it. Literary and Linguistic Computing",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Evans",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "16",
"issue": "",
"pages": "45--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Evans. 2001. Applying machine learning to- ward an automatic classification of it. Literary and Linguistic Computing, 16(1):45-57.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A statistical model for multilingual entity detection and tracking",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "Hany",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Abraham",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "Hongyan",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Nicolov",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Florian, Hany Hassan, Abraham Ittycheriah, Hongyan Jing, Xiaoqiang Luo, Nicolas Nicolov, and Salim Roukos. 2004. A statistical model for multi- lingual entity detection and tracking. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 1-8.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "OntoNotes: The 90% solution",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of HLT/NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of HLT/NAACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A large-scale exploration of effective global features for a joint entity detection and tracking model",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daume",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 2005 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daume III and Daniel Marcu. 2005. A large-scale exploration of effective global features for a joint en- tity detection and tracking model. In Proceedings of the 2005 Conference on Empirical Methods in Nat- ural Language Processing.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Identifying and tracking entity mentions in a maximum entropy framework",
"authors": [
{
"first": "Abraham",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "Lucian",
"middle": [
"Vlad"
],
"last": "Lita",
"suffix": ""
},
{
"first": "Nanda",
"middle": [],
"last": "Kambhatla",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Nicolov",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Margo",
"middle": [],
"last": "Stys",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abraham Ittycheriah, Lucian Vlad Lita, Nanda Kamb- hatla, Nicolas Nicolov, Salim Roukos, and Margo Stys. 2003. Identifying and tracking entity men- tions in a maximum entropy framework. In Pro- ceedings of the Conference of the North American Chapter of the Association for Computational Lin- guistics on Human Language Technology.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Making large-scale SVM learning practical",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Kernel Methods -Support Vector Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 1999. Making large-scale SVM learning practical. In B. Sch\u00f6lkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning. MIT-Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Discourse referents",
"authors": [
{
"first": "Lauri",
"middle": [],
"last": "Karttunen",
"suffix": ""
}
],
"year": 1976,
"venue": "Sytax and Semantics",
"volume": "7",
"issue": "",
"pages": "361--385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lauri Karttunen. 1976. Discourse referents. In J. McKawley, editor, Sytax and Semantics, vol- ume 7, pages 361-385. Academic Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Mention detection: Heuristics for the OntoNotes annotations",
"authors": [
{
"first": "Jonathan",
"middle": [
"K"
],
"last": "Kummerfeld",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Burkett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "102--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan K Kummerfeld, Mohit Bansal, David Burkett, and Dan Klein. 2011. Mention detection: Heuris- tics for the OntoNotes annotations. In Proceedings of the Fifteenth Conference on Computational Nat- ural Language Learning: Shared Task, pages 102- 106, Portland, Oregon, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Efficient convolution kernels for dependency and constituent syntactic trees",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of European Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "318--329",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Moschitti. 2006. Efficient convolution ker- nels for dependency and constituent syntactic trees. In Proceedings of European Conference on Machine Learning, pages 318-329.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Kernel methods, syntax and semantics for relational text categorization",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceeding of the International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Moschitti. 2008. Kernel methods, syntax and semantics for relational text categorization. In Proceeding of the International Conference on In- formation and Knowledge Management, NY, USA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Unsupervised models for coreference resolution",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "640--649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Ng. 2008. Unsupervised models for corefer- ence resolution. In Proceedings of the 2008 Con- ference on Empirical Methods in Natural Language Processing, pages 640-649.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Sixteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL- 2012 shared task: Modeling multilingual unre- stricted coreference in OntoNotes. In Proceedings of the Sixteenth Conference on Computational Natu- ral Language Learning (CoNLL 2012), Jeju, Korea.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A machine learning approach to coreference resolution of noun phrases",
"authors": [],
"year": 2001,
"venue": "Computational Linguistic",
"volume": "27",
"issue": "4",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning ap- proach to coreference resolution of noun phrases. Computational Linguistic, 27(4):521-544.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "BART: a modular toolkit for coreference resolution",
"authors": [
{
"first": "Yannick",
"middle": [],
"last": "Versley",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Eidelman",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Jern",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Xiaofeng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies",
"volume": "",
"issue": "",
"pages": "9--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yannick Versley, Simone Paolo Ponzetto, Massimo Poesio, Vladimir Eidelman, Alan Jern, Jason Smith, Xiaofeng Yang, and Alessandro Moschitti. 2008. BART: a modular toolkit for coreference resolution. In Proceedings of the 46th Annual Meeting of the As- sociation for Computational Linguistics on Human Language Technologies, pages 9-12.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Mention detection crossing the language barrier",
"authors": [
{
"first": "Imed",
"middle": [],
"last": "Zitouni",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Imed Zitouni and Radu Florian. 2008. Mention detec- tion crossing the language barrier. In Proceedings of the 2008 Conference on Empirical Methods in Nat- ural Language Processing.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Parse tree for \"This type of earthquake\", examples (1-2): before pruning. Up-down (left) vs. radial (right) pruning for \"This type of earthquake,\" examples(1-2)language-specific manual engineering.",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Performance of an end-to-end coreference resolution system for different values of MD Recall and Precision in a simulation experiment: MELA F-score on the Arabic and Chinese development data.",
"num": null,
"uris": null
},
"TABREF1": {
"num": null,
"html": null,
"text": "lists the features for our vector representation. Nominal values are binarized, leading to 10 binary or continuous features.",
"content": "<table><tr><td>feature</td><td>possible values</td></tr><tr><td>Gender</td><td>F,M,Unknown</td></tr><tr><td>Definiteness</td><td>Yes, No</td></tr><tr><td>Number</td><td>Sg,Pl,Du,Unknown</td></tr><tr><td>MentionType</td><td>Name,Nominal,Pronoun</td></tr><tr><td colspan=\"2\">#same-surface NPs continuous (normalized)</td></tr><tr><td>in the doc</td><td/></tr></table>",
"type_str": "table"
},
"TABREF2": {
"num": null,
"html": null,
"text": "",
"content": "<table/>",
"type_str": "table"
},
"TABREF4": {
"num": null,
"html": null,
"text": "",
"content": "<table/>",
"type_str": "table"
},
"TABREF6": {
"num": null,
"html": null,
"text": "Features used for Coreference Resolution in Arabic and Chinese: each feature describes a pair of mentions",
"content": "<table/>",
"type_str": "table"
},
"TABREF8": {
"num": null,
"html": null,
"text": "",
"content": "<table/>",
"type_str": "table"
},
"TABREF10": {
"num": null,
"html": null,
"text": "",
"content": "<table><tr><td colspan=\"3\">features features</td></tr><tr><td>Arabic</td><td/><td/></tr><tr><td>all-NP</td><td>46.79</td><td>47.36</td></tr><tr><td>English MD</td><td>43.77</td><td>43.65</td></tr><tr><td>TK-MD</td><td>48.38 \u2020</td><td>51.54 \u2020</td></tr><tr><td>gold-NP</td><td>63.07 \u2020</td><td>65.57 \u2020</td></tr><tr><td>Chinese</td><td/><td/></tr><tr><td>all-NP</td><td>53.26</td><td>53.24</td></tr><tr><td>English MD</td><td>48.99</td><td>48.99</td></tr><tr><td>TK-MD</td><td>58.11 \u2020</td><td>58.15 \u2020</td></tr><tr><td>gold-NP</td><td>59.97 \u2020</td><td>60.04 \u2020</td></tr></table>",
"type_str": "table"
},
"TABREF11": {
"num": null,
"html": null,
"text": "Evaluating the impact of MD and linguistic knowledge: MELA F-score on the official CoNLL-2012 test set, significant improvement over the corresponding all-NP baseline shown with",
"content": "<table/>",
"type_str": "table"
}
}
}
}