{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:21:17.856760Z"
},
"title": "A Brief Survey and Comparative Study of Recent Development of Pronoun Coreference Resolution in English",
"authors": [
{
"first": "Hongming",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HKUST",
"location": {}
},
"email": "hzhangal@cse.ust.hk"
},
{
"first": "Xinran",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HKUST",
"location": {}
},
"email": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HKUST",
"location": {}
},
"email": "yqsong@cse.ust.hk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Pronoun Coreference Resolution (PCR) is the task of resolving pronominal expressions to all mentions they refer to. Compared with the general coreference resolution task, the main challenge of PCR is the coreference relation prediction rather than the mention detection. As one important natural language understanding (NLU) component, pronoun resolution is crucial for many downstream tasks and still challenging for existing models, which motivates us to survey existing approaches and think about how to do better. In this survey, we first introduce representative datasets and models for the ordinary pronoun coreference resolution task. Then we focus on recent progress on hard pronoun coreference resolution problems (e.g., Winograd Schema Challenge) to analyze how well current models can understand commonsense. We conduct extensive experiments to show that even though current models are achieving good performance on the standard evaluation set, they are still not ready to be used in real applications (e.g., all SOTA models struggle on correctly resolving pronouns to infrequent objects). All experiment codes will be available upon acceptance.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Pronoun Coreference Resolution (PCR) is the task of resolving pronominal expressions to all mentions they refer to. Compared with the general coreference resolution task, the main challenge of PCR is the coreference relation prediction rather than the mention detection. As one important natural language understanding (NLU) component, pronoun resolution is crucial for many downstream tasks and still challenging for existing models, which motivates us to survey existing approaches and think about how to do better. In this survey, we first introduce representative datasets and models for the ordinary pronoun coreference resolution task. Then we focus on recent progress on hard pronoun coreference resolution problems (e.g., Winograd Schema Challenge) to analyze how well current models can understand commonsense. We conduct extensive experiments to show that even though current models are achieving good performance on the standard evaluation set, they are still not ready to be used in real applications (e.g., all SOTA models struggle on correctly resolving pronouns to infrequent objects). All experiment codes will be available upon acceptance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The question of how human beings resolve pronouns 1 has long been of interest to both linguistic and natural language processing (NLP) communities, for the reason that a pronoun itself only having weak semantic meaning brings challenges to natural language understanding. To explore solutions for that question, pronoun coreference resolution (PCR) (Hobbs, 1978) was proposed. 2 As a challenging yet vital natural language understanding 1 Some pronouns may refer to non-nominal antecedents. For example, the pronoun \"it\" in \"It is too cold in the Winter here\" does not refer to any real object (Kolhatkar et al., 2018) . But in this survey, we only focus on pronouns that refer to nominal antecedents.",
"cite_spans": [
{
"start": 349,
"end": 362,
"text": "(Hobbs, 1978)",
"ref_id": "BIBREF14"
},
{
"start": 377,
"end": 378,
"text": "2",
"ref_id": null
},
{
"start": 594,
"end": 618,
"text": "(Kolhatkar et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 PCR is also known as anaphora resolution (Versley et al., 2016) . Previous studies (Ng, 2005; Zhang et al., 2019c) task, pronoun coreference resolution is to find the correct reference for a given pronominal anaphor in the context and has been shown to be useful for a series of downstream tasks, such as machine translation (Mitkov et al., 1995; Lapshinova-Koltunski et al., 2018) , summarization (Steinberger et al., 2007) , and dialog systems (Strube and M\u00fcller, 2003) . To investigate the difference between PCR and the general coreference resolution task, which tries to identify not only the coreference relations between noun phrases (NP) and pronouns (P) but also potential coreference relations between noun phrases or coreference relations between pronouns, we conduct experiments with one recent breakthrough model (i.e., End-to-end model (Lee et al., 2017) ) on the CoNLL-2012 shard task (Pradhan et al., 2012) under two settings: one without the gold mention and one with the gold mention. In the 'without gold mention' setting, models are required to first identify spans from the documents as the mentions and then predict the coreference relations among these mentions. As a comparison, if gold focus on three kinds of pronouns: third personal pronoun (e.g., she, her, he, him, them, they, it), possessive pronoun (e.g., his, hers, its, their, theirs), and demonstrative pronoun (e.g., this, that, these, those). The first and second personal pronouns are typically not considered as they often refer to the current speakers, which are normally out of the conversation or document. Besides that, conventional PCR works (Ng, 2005; Zhang et al., 2019b,c) mostly focusing on identifying coreference relations between pronouns and noun phrases rather than coreference relation between pronouns. mentions are provided, models only need to predict the coreference relations (i.e., the task of distinguishing between referential and non-referential instances is ignored). 
From the results in Table 1 , we can see that, without the gold mention, the model performs well on P-P coreference relations, while not as well on the other two kinds of relations. However, if gold mentions are provided, the model can achieve very good performance on the NP-NP coreference relations. Compared with other kinds of coreference relations, no matter whether the gold mention is provided or not, resolving pronouns to noun phrases is always the most challenging one.",
"cite_spans": [
{
"start": 43,
"end": 65,
"text": "(Versley et al., 2016)",
"ref_id": "BIBREF44"
},
{
"start": 85,
"end": 95,
"text": "(Ng, 2005;",
"ref_id": "BIBREF29"
},
{
"start": 96,
"end": 116,
"text": "Zhang et al., 2019c)",
"ref_id": "BIBREF51"
},
{
"start": 327,
"end": 348,
"text": "(Mitkov et al., 1995;",
"ref_id": "BIBREF28"
},
{
"start": 349,
"end": 383,
"text": "Lapshinova-Koltunski et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 400,
"end": 426,
"text": "(Steinberger et al., 2007)",
"ref_id": "BIBREF40"
},
{
"start": 448,
"end": 473,
"text": "(Strube and M\u00fcller, 2003)",
"ref_id": "BIBREF41"
},
{
"start": 852,
"end": 870,
"text": "(Lee et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 902,
"end": 924,
"text": "(Pradhan et al., 2012)",
"ref_id": "BIBREF33"
},
{
"start": 1637,
"end": 1647,
"text": "(Ng, 2005;",
"ref_id": "BIBREF29"
},
{
"start": 1648,
"end": 1670,
"text": "Zhang et al., 2019b,c)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 2003,
"end": 2010,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The correct resolution of pronouns typically requires reasoning over both linguistic knowledge (e.g., 'they' typically refers to plural objects 3 ) and commonsense knowledge (e.g., in sentence \"The fish ate the worm, it was hungry.\", 'it' refers to 'fish' because hungry things tend to eat rather than be eaten.). Considering that the ordinary PCR task evaluates the inference over both types of knowledge at the same time, the performance on ordinary PCR tasks cannot clearly reflect models' performance regarding different knowledge types. To address this problem, the Winograd Schema Challenge (WSC) (Levesque et al., 2012) task is proposed. The influence of all commonly used linguistic knowledge is avoided during the creation of WSC such that WSC can be used to reflect how current PCR models can understand commonsense knowledge. In Section 2 and 3, we introduce the progress and remaining challenges on the ordinary PCR and WSC tasks respectively. After that, we introduce other PCR tasks that are developed for different research purposes in Section 4. In the end, we conclude this survey with Section 5. The contribution of this survey is three-fold: (1) we broadly introduce available PCR tasks, datasets, and models; (2) We summarize the main contribution of recent models; 3We conduct experiments to analyze the limitations of current models, which can help the community think about how to better solve PCR in the future.",
"cite_spans": [
{
"start": 603,
"end": 626,
"text": "(Levesque et al., 2012)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Ordinary pronoun coreference resolution tasks are often defined over formal textual corpus (e.g., news-paper) and the annotation is usually conducted by domain experts or linguists. The PCR task can be formally defined as follows. Given a text D, which contains a pronoun p, the goal is to identify all the mentions that p refers to. We denote the correct mentions p refers to as c \u2208 C, where C is the correct mention set. Similarly, each candidate span is denoted as s \u2208 S, where S is the set of all candidate spans. Note that in the case where no golden mentions are provided, all possible spans in D are used to form S. The task is thus to identify C out of S. In the rest of this section, we introduce the widely used datasets as well as the progress and limitation of current approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ordinary PCR",
"sec_num": "2"
},
{
"text": "Throughout the years, researchers in the NLP community have devoted great efforts to developing high-quality coreference resolution datasets 4 and we introduce representative ones as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.1"
},
{
"text": "1. MUC: MUC-6 (Grishman and Sundheim, 1996) and MUC-7 (Chinchor, 1998) , which were developed for the 6 th and 7 th message understanding conferences respectively, are the earliest coreference resolution datasets. They are focusing on English news articles and are relatively small compared with modern datasets.",
"cite_spans": [
{
"start": 14,
"end": 43,
"text": "(Grishman and Sundheim, 1996)",
"ref_id": "BIBREF11"
},
{
"start": 48,
"end": 70,
"text": "MUC-7 (Chinchor, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.1"
},
{
"text": "The ACE dataset (Doddington et al., 2004) ",
"cite_spans": [
{
"start": 16,
"end": 41,
"text": "(Doddington et al., 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ACE:",
"sec_num": "2."
},
{
"text": "In this subsection, we introduce representative models for the ordinary PCR task. We first briefly introduce conventional approaches that rely on human-designed rules or features and then introduce the end-to-end model, which is a groundbreaking model for solving coreference resolution tasks. After that, we briefly introduce a few recent improvements over the end-to-end model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2.2"
},
{
"text": "Before the deep learning era, human-designed rules (Hobbs, 1978; Raghunathan et al., 2010) , knowledge (Ponzetto and Strube, 2006; Versley et al., 2016) , and features (Ng, 2005; Wiseman et al., 2016) dominated the general coreference resolution and PCR tasks. Some rules and features are crucial for correctly resolving pronouns (Lee et al., 2013) . For example, 'he' typically refers to males and 'she' typically refers to females; 'it' typically refers to singular objects and 'them' typically refers to plural objects. The performances of these methods heavily rely on the coverage and quality of the manually defined rules and features. Based on these designed features (Bengtson and Roth, 2008) , a few more advanced machine learning models were applied to the coreference resolution task. For example, instead of identifying coreference relation pair-wisely, (Clark and Manning, 2015) proposes an entity-centric coreference system that can learn an effective policy for building coreference chains incrementally. Besides that, a novel model was also proposed to predict coreference relations with a deep reinforcement learning framework (Clark and Manning, 2016) . Moreover, heuristic rules based on linguistic knowledge can also be incorporated into constraints for machine learning models .",
"cite_spans": [
{
"start": 51,
"end": 64,
"text": "(Hobbs, 1978;",
"ref_id": "BIBREF14"
},
{
"start": 65,
"end": 90,
"text": "Raghunathan et al., 2010)",
"ref_id": "BIBREF36"
},
{
"start": 117,
"end": 130,
"text": "Strube, 2006;",
"ref_id": "BIBREF32"
},
{
"start": 131,
"end": 152,
"text": "Versley et al., 2016)",
"ref_id": "BIBREF44"
},
{
"start": 168,
"end": 178,
"text": "(Ng, 2005;",
"ref_id": "BIBREF29"
},
{
"start": 179,
"end": 200,
"text": "Wiseman et al., 2016)",
"ref_id": "BIBREF45"
},
{
"start": 330,
"end": 348,
"text": "(Lee et al., 2013)",
"ref_id": "BIBREF21"
},
{
"start": 675,
"end": 700,
"text": "(Bengtson and Roth, 2008)",
"ref_id": "BIBREF0"
},
{
"start": 866,
"end": 891,
"text": "(Clark and Manning, 2015)",
"ref_id": "BIBREF4"
},
{
"start": 1144,
"end": 1169,
"text": "(Clark and Manning, 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rule and Feature Based Methods",
"sec_num": "2.2.1"
},
{
"text": "Leveraging human-designed rules or features can help accurately resolve some pronouns, but it is hard to manually design rules to cover all cases. To solve this problem, an end-to-end deep model (Lee et al., 2017) was proposed. Different from other machine learning-based methods, it does not use any human-defined rules, yet achieves surprisingly good performance. Specifically, the end-toend model first leverages the combination of Bidirectional LSTM and inner-attention modules to encode local context and generate representations for all potential mentions. After that, a standard feed-forward neural network is used to predict the coreference relations. Experiment results show that the proposed model is simple yet effective. Its success proves that current deep models are capable of capturing rich contextual information, which is crucial for resolving coreference relations.",
"cite_spans": [
{
"start": 195,
"end": 213,
"text": "(Lee et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-end Model",
"sec_num": "2.2.2"
},
{
"text": "Recently, on top of the end-to-end model, a few improved works were proposed to address different limitations of the original end-to-end model 5 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further Improvements",
"sec_num": "2.2.3"
},
{
"text": "1. Higher-order Information: One limitation of the original end-to-end model is that all predictions are based on pairs, which is not sufficient for capturing higher-order coreference relations. To fix this issue, a differentiable approximation module was proposed in to provide the higher-order coreference resolution inference ability (i.e., leveraging the coreference cluster to better predict the coreference relations). Moreover, this work first incorporates ELMo (Peters et al., 2018) , a kind of deep contextualized word representations, as part of the word representation, which is proven very effective. (Clark and Manning, 2015) 25.8 62.1 36.5 28.9 64.9 40.0 9.8 6.3 7.6 25.4 59.3 36.5 Deep-RL (Clark and Manning, 2016) 78.6 63.9 70.5 73.3 68.9 71.0 3.7 2.9 5.5 76.4 61.2 68.0",
"cite_spans": [
{
"start": 469,
"end": 490,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF30"
},
{
"start": 613,
"end": 638,
"text": "(Clark and Manning, 2015)",
"ref_id": "BIBREF4"
},
{
"start": 704,
"end": 729,
"text": "(Clark and Manning, 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Further Improvements",
"sec_num": "2.2.3"
},
{
"text": "End-to-end (Lee et al., 2017) 70.7 77.8 74.1 75.6 74.0 74.8 37.8 71.7 49.5 68.3 76.4 72.1 + KG (Zhang et al., 2019c) 80.0 75.6 77.7 81.7 72.2 76.7 50.8 64.6 56.9 77.9 74.0 75.9 + SpanBERT (Joshi et al., 2020) 82.4 80.5 81.5 83.9 81.0 82.4 52.0 61.5 56.4 82.2 80.2 81.2 ios. To solve this problem, two works (Zhang et al., 2019b,c) were proposed to inject external structured knowledge into the end-to-end model. Among these two, (Zhang et al., 2019b) requires converting external knowledge into features while (Zhang et al., 2019c) directly uses external knowledge in the format of triplets.",
"cite_spans": [
{
"start": 11,
"end": 29,
"text": "(Lee et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 95,
"end": 116,
"text": "(Zhang et al., 2019c)",
"ref_id": "BIBREF51"
},
{
"start": 188,
"end": 208,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 307,
"end": 330,
"text": "(Zhang et al., 2019b,c)",
"ref_id": null
},
{
"start": 429,
"end": 450,
"text": "(Zhang et al., 2019b)",
"ref_id": "BIBREF50"
},
{
"start": 510,
"end": 531,
"text": "(Zhang et al., 2019c)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Further Improvements",
"sec_num": "2.2.3"
},
{
"text": "Recently, along with the fast development of language representation models, a few works (Kantor and Globerson, 2019; Joshi et al., 2020) have been trying to replace the encoding layer of the original end-to-end model with more powerful language representation models. Span-BERT (Joshi et al., 2020) replaces ELMo with SpanBERT and boosts the performance by 6.6 F1 over the general coreference resolution task.",
"cite_spans": [
{
"start": 279,
"end": 299,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stronger Language Representation Models:",
"sec_num": "3."
},
{
"text": "We follow the experimental setting of (Zhang et al., 2019c) and test the performance 6 of representative models (Raghunathan et al., 2010; Manning, 2015, 2016; Lee et al., 2017; Zhang et al., 6 We use the released codes of different models along with their default hyper-parameters to finish the experiments. For the end2end model, we also include ELMo (Peters et al., 2018) as part of the representation and achieve better performance than the original one in Table 1. 2019c; Joshi et al., 2020) on the CoNLL-2012 dataset (Pradhan et al., 2012) . The experiment setting (both detection the mentions and resolving the coreference relations) and evaluation metric are the same as these previous works on CoNLL-2012. From the results in Table 2 , we can observe that with the help of the end-to-end model and further modifications, the community has made great progress on the standard evaluation set. For example, the end-to-end model achieves an F1 score over 70 and adding external knowledge (either in a structured way or a representation way) further boost the performance. Among all pronoun types, all models perform better on third personal and possessive pronouns, and relatively poorly on demonstrative ones. This is mainly because of the imbalanced distribution of the dataset (i.e., third personal and possessive pronouns appear much more than demonstrative ones).",
"cite_spans": [
{
"start": 38,
"end": 59,
"text": "(Zhang et al., 2019c)",
"ref_id": "BIBREF51"
},
{
"start": 112,
"end": 138,
"text": "(Raghunathan et al., 2010;",
"ref_id": "BIBREF36"
},
{
"start": 139,
"end": 159,
"text": "Manning, 2015, 2016;",
"ref_id": null
},
{
"start": 160,
"end": 177,
"text": "Lee et al., 2017;",
"ref_id": "BIBREF22"
},
{
"start": 178,
"end": 193,
"text": "Zhang et al., 6",
"ref_id": null
},
{
"start": 353,
"end": 374,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF30"
},
{
"start": 523,
"end": 545,
"text": "(Pradhan et al., 2012)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 461,
"end": 469,
"text": "Table 1.",
"ref_id": "TABREF1"
},
{
"start": 735,
"end": 742,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Performances and Analysis",
"sec_num": "2.3"
},
{
"text": "To investigate whether current PCR models are good enough to be used in real applications, which can be out of the training domain, we conduct experiments on the cross-domain setting. In detail, we select two different PCR datasets from different domains (i.e., CoNLL (Pradhan et al., 2012) from news and i2b2 (Uzuner et al., 2012) from the medical domain) and try to train the model on one dataset and test it on the other. We conduct experiments with three best-performing models and show the results in Table 3 , from which we can see that all models 7 perform significantly worse if they 7 SpanBERT performs poorly on i2b2 when it is not trained on it. The reason can be that the medical corpus is too different from the pre-trained corpus of SpanBERT and we use the default hyper-parameters, which might not be the best ones. Since the main contribution of SpanBERT is helping models to identify the mention spans, in our setting focusing on reference detection, such improvement is not necessary are used across domains (i.e., when the domains of training and test data are different). Compared with the baseline method, adding explicit knowledge can help achieve slightly better performance in the cross-domain setting because its training objective allows models to learn to selectively use suitable knowledge rather than just fitting the training data.",
"cite_spans": [
{
"start": 268,
"end": 290,
"text": "(Pradhan et al., 2012)",
"ref_id": "BIBREF33"
},
{
"start": 310,
"end": 331,
"text": "(Uzuner et al., 2012)",
"ref_id": "BIBREF43"
},
{
"start": 592,
"end": 593,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 506,
"end": 513,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Cross-domain Performance",
"sec_num": "2.3.1"
},
{
"text": "To further analyze the performance of existing models, we split the pronouns based on the frequency of the objects they refer to. If an object appears more than ten times in the whole dataset, we denote it as a frequent object. Otherwise, we denote it as an infrequent object. As a result, we collect 1,095 frequent and 470,232 infrequent objects, whose average frequencies are 36.2 and 1.46 respectively. We report the performance of best-performing models on infrequent and frequent objects separately in Table 4 . In general, all models perform better on frequent objects because they appear more in the training data. Another interesting observation is that, even though adding external KG and a stronger language representation model can both boost the performance, their improvements come from different types of objects. For example, the main contribution of adding KG is on infrequent objects because even though they are less frequent in the training data, they can still be covered by some external knowledge. As a comparison, using a strong language representation model mainly benefits the frequent objects because it has a stronger ability to fit the training data. This observation is consistent with our previous observations that adding external KG has more effect on those relatively rare pronouns (i.e., demonstrative pronouns).",
"cite_spans": [],
"ref_spans": [
{
"start": 507,
"end": 514,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Influence of Frequency",
"sec_num": "2.3.2"
},
{
"text": "As aforementioned, the correct resolution of pronouns requires the inference over both linguistic knowledge and commonsense knowledge. To clearly reflect how models can resolve pronouns that require the inference over commonsense knowledge, the hard PCR task was proposed. As Winograd Schema Challenge (WSC) is one of the most popular hard PCR tasks, we use the task definition in WSC to define the hard PCR task. For each question q, a sentence s is given, which contains a pronoun p and two candidates n 1 , n 2 . The task is to find out which of the candidates p refers to. Different from the ordinary PCR task, the influence of all commonly observed features (e.g., gender or plurality) are removed via careful expert design. In WSC, all questions are paired up such that questions in each pair have only minor differences (mostly one-word difference), but the answers are reversed. One pair of the WSC instances is shown in Figure 1 . Solving these questions typically requires the support of complex commonsense knowledge. For example, human beings can know that the pronoun 'it' in the first sentence refers to 'fish' while the one in the second sentence refers to 'worm' because 'hungry' is a common property of something eating while 'tasty' is a common property of something being eaten. Without the support of such commonsense knowledge, answering these questions becomes challenging because both the fish and worm can be hungry or tasty by themselves.",
"cite_spans": [],
"ref_spans": [
{
"start": 929,
"end": 937,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Hard PCR",
"sec_num": "3"
},
{
"text": "We introduce datasets as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "1. Winograd Schema Challenge: Among all the hard pronoun coreference resolution tasks, WSC is among the most popular ones. In total, WSC has 273 questions 8 . Its small size determines that it cannot be used to train a good supervised model and can only be used as the evaluation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "Another hard pronoun coreference resolution dataset is the definite pronoun resolution dataset (DPR) 9 (Rahman and Ng, 2012) . Different from WSC, DPR leveraged undergraduates rather than experts to create the dataset. In total, DPR collected 1,886 questions, which is a slightly larger scale than the official WSC. However, as DPR can not guarantee that all DPR questions follow the strict design guideline of WSC, questions in DPR are relatively simpler.",
"cite_spans": [
{
"start": 103,
"end": 124,
"text": "(Rahman and Ng, 2012)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definite Pronoun Resolution:",
"sec_num": "2."
},
{
"text": "and DPR is their small scales. To create a larger scale dataset, WinoGrande (Sakaguchi et al., 2020) was proposed. By leveraging annotators from Amazon Mechanical Turk, Wino-Grande collected 53 thousand WSC-like questions. Moreover, to make sure of the dataset quality, WinoGrande applied a bias reduction algorithm to filter out examples that may contain annotation bias. Experimental results prove that WinoGrande is much more challenging than the original WSC because the SOTA models on WSC only achieve 51% accuracy on Wino-Grande, which is similar to the random guess.",
"cite_spans": [
{
"start": 76,
"end": 100,
"text": "(Sakaguchi et al., 2020)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WinoGrande: One common problem of WSC",
"sec_num": "3."
},
{
"text": "KnowRef (Emami et al., 2019) , similar to WinoGrande, also aimed at creating a larger scale WSC dataset but with a different approach. Instead of using crowd-sourcing + adversarial filtering framework, KnowRef tried to extract WSC-like questions from raw sentences. As a result, KnowRef collected eight thousand WSC-like questions.",
"cite_spans": [
{
"start": 8,
"end": 28,
"text": "(Emami et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "KnowRef:",
"sec_num": "4."
},
{
"text": "In this subsection, we introduce existing approaches for the hard PCR task. As the majority of the methods are evaluated based on WSC, all the discussion and analysis are based on their performance on WSC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.2"
},
{
"text": "At first, people tried to leverage different commonsense knowledge resources to solve WSC questions in an explainable way. For example, Liu et al. (2016) first leveraged the commonsense triplets from ConceptNet (Liu and Singh, 2004) to train the word embeddings and then applied the embeddings to solve the WSC task. Knowledge hunter (Emami et al., 2018) proposed to leverage search engines (e.g., Google) to acquire needed commonsense knowledge. It first searched WSC questions in search engines and then used the returned searching results to solve WSC questions. SP-10K (Zhang et al., 2019a) conducted experiments to show that selectional preference (SP) knowledge such as human beings are more likely to eat 'food' rather than 'rock' can also be helpful for solving WSC questions. Last but not least, ASER (Zhang et al., 2020) tried to use knowledge about eventualities (e.g., 'being hungry' can cause 'eat food') to solve WSC questions. In general, structured commonsense knowledge can help solve one-third of the WSC questions, but their overall performance is limited due to their low coverage. There are mainly two reasons: (1) coverage of existing commonsense resources are not large enough;",
"cite_spans": [
{
"start": 136,
"end": 153,
"text": "Liu et al. (2016)",
"ref_id": "BIBREF26"
},
{
"start": 211,
"end": 232,
"text": "(Liu and Singh, 2004)",
"ref_id": "BIBREF25"
},
{
"start": 334,
"end": 354,
"text": "(Emami et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 573,
"end": 594,
"text": "(Zhang et al., 2019a)",
"ref_id": "BIBREF48"
},
{
"start": 810,
"end": 830,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reasoning with Structured Knowledge",
"sec_num": "3.2.1"
},
{
"text": "(2) lack of a principled way to use structured knowledge for NLP tasks. Current methods (Emami et al., 2018; Zhang et al., 2019a Zhang et al., , 2020 mostly rely on string match. However, for many WSC questions, it is hard to find supportive knowledge in such way.",
"cite_spans": [
{
"start": 88,
"end": 108,
"text": "(Emami et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 109,
"end": 128,
"text": "Zhang et al., 2019a",
"ref_id": "BIBREF48"
},
{
"start": 129,
"end": 149,
"text": "Zhang et al., , 2020",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reasoning with Structured Knowledge",
"sec_num": "3.2.1"
},
{
"text": "Another approach is leveraging language models to solve WSC questions (Trinh and Le, 2018) , where each WSC question is first converted into two sentences by replacing the target pronoun with the two candidates respectively and then the language models can be employed to compute the probability of both sentences. The sentence with a higher probability will be selected as the final prediction. As this method does not require any string match, it can make prediction for all WSC questions and achieve better overall performance. Recently, a more advanced transformer-based language model GPT-2 (Radford et al., 2019) achieved better performance due to its stronger language representation ability. The success of language models demonstrates that rich commonsense knowledge can be indeed encoded within language models implicitly. Another interesting finding about these language model based approaches is that they proposed two settings to predict the probability: (1) Full: use the probability of the whole sentence as the final prediction;",
"cite_spans": [
{
"start": 70,
"end": 90,
"text": "(Trinh and Le, 2018)",
"ref_id": "BIBREF42"
},
{
"start": 596,
"end": 618,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Representation Models",
"sec_num": "3.2.2"
},
{
"text": "(2) Partial: only consider the probability of the partial sentence after the target pronoun. Experiments show that the partial model always outperforms the full model. One explanation is that the influence of the imbalanced distribution of candidate words is relieved by only considering the sentence probability after them. Such observation also explains why GPT-2 can outperform unsuper- (Emami et al., 2018) 119 79 75 60.1% 57.3% SP (Human) (Zhang et al., 2019a) 15 0 258 100% 52.7% SP (PP) (Zhang et al., 2019a) 50 26 197 65.8% 54.4% ASER (String Match) (Zhang et al., 2020) 63 27 183 70.0% 56.6% LM (Single) (Trinh and Le, 2018) 149 124 0 54.5% 54.5% LM (Ensemble) (Trinh and Le, 2018) 168 105 0 61.5% 61.5% GPT-2 (Radford et al., 2019) 193 80 0 70.7% 70.7% Finetuning BERT (Devlin et al., 2019) +ASER (Zhang et al., 2020) 177 96 0 64.5% 64.5% BERT (Devlin et al., 2019) +DPR (Rahman and Ng, 2012) 195 78 0 71.4% 71.4% BERT (Devlin et al., 2019) +WinoGrande (Sakaguchi et al., 2020) 210 63 0 76.9% 76.9% RoBERTa (Liu et al., 2019) +DRP (Rahman and Ng, 2012) 227 46 0 83.1% 83.1% RoBERTa (Liu et al., 2019) +WinoGrande (Sakaguchi et al., 2020) 246 27 0 90.1% 90.1%",
"cite_spans": [
{
"start": 390,
"end": 410,
"text": "(Emami et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 444,
"end": 465,
"text": "(Zhang et al., 2019a)",
"ref_id": "BIBREF48"
},
{
"start": 494,
"end": 515,
"text": "(Zhang et al., 2019a)",
"ref_id": "BIBREF48"
},
{
"start": 558,
"end": 578,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF49"
},
{
"start": 613,
"end": 633,
"text": "(Trinh and Le, 2018)",
"ref_id": "BIBREF42"
},
{
"start": 670,
"end": 690,
"text": "(Trinh and Le, 2018)",
"ref_id": "BIBREF42"
},
{
"start": 719,
"end": 741,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF35"
},
{
"start": 779,
"end": 800,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 807,
"end": 827,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF49"
},
{
"start": 854,
"end": 875,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 881,
"end": 902,
"text": "(Rahman and Ng, 2012)",
"ref_id": "BIBREF37"
},
{
"start": 929,
"end": 950,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 963,
"end": 987,
"text": "(Sakaguchi et al., 2020)",
"ref_id": "BIBREF39"
},
{
"start": 1017,
"end": 1035,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 1041,
"end": 1062,
"text": "(Rahman and Ng, 2012)",
"ref_id": "BIBREF37"
},
{
"start": 1092,
"end": 1110,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 1123,
"end": 1147,
"text": "(Sakaguchi et al., 2020)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Representation Models",
"sec_num": "3.2.2"
},
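The Full vs. Partial scoring schemes described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes whitespace tokenization, and `toy_lp` is a hypothetical stand-in scorer for a real language model such as GPT-2.

```python
# A minimal sketch of the Full vs. Partial scoring schemes for WSC.
# Assumptions: whitespace tokenization; `toy_lp` is a hypothetical
# scorer standing in for a real language model such as GPT-2.

def score_candidates(template_tokens, pronoun_index, candidates,
                     logprob_fn, mode="partial"):
    """Substitute each candidate for the pronoun and score the sentence.

    mode="full":    sum log-probabilities over the whole sentence.
    mode="partial": sum only over the tokens after the substituted
                    candidate, which removes the bias introduced by the
                    candidate words' own (imbalanced) frequencies.
    """
    scores = {}
    for cand in candidates:
        cand_toks = cand.split()
        tokens = (template_tokens[:pronoun_index] + cand_toks
                  + template_tokens[pronoun_index + 1:])
        start = 0 if mode == "full" else pronoun_index + len(cand_toks)
        scores[cand] = sum(logprob_fn(tokens[:i], tokens[i])
                           for i in range(start, len(tokens)))
    return max(scores, key=scores.get), scores

def toy_lp(prefix, token):
    """Hypothetical scorer: prefers 'big' shortly after a 'trophy' mention."""
    if token == "big":
        return -0.5 if "trophy" in prefix[-3:] else -2.0
    return -1.0

sentence = "The trophy does not fit in the suitcase because it is too big".split()
candidates = ["the trophy", "the suitcase"]
best, _ = score_candidates(sentence, sentence.index("it"), candidates, toy_lp)
```

With a real LM, `logprob_fn` would return per-token log-probabilities from the model; the comparison logic between the two candidate sentences stays the same.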
{
"text": "Original (Levesque et al., 2012) 252 21 0 92.1% 92.1% Recent (Sakaguchi et al., 2020) 264 9 0 96.5% 96.5% vised BERT on WSC because models based on BERT, which relies on predicting the probability of candidate words, cannot get rid of such noise.",
"cite_spans": [
{
"start": 9,
"end": 32,
"text": "(Levesque et al., 2012)",
"ref_id": "BIBREF24"
},
{
"start": 61,
"end": 85,
"text": "(Sakaguchi et al., 2020)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Beings",
"sec_num": null
},
{
"text": "Last but not least, we introduce current bestperforming models on the WSC task, which finetunes pre-trained language representation models (e.g., BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019) ) with a similar dataset (e.g., DPR (Rahman and Ng, 2012) or WinoGrande (Sakaguchi et al., 2020) ). CorefBERT (Ye et al., 2020) , in addition, introduced a new pre-training task that requires models to predict mention references. This idea of fine-tuning was originally proposed by (Kocijan et al., 2019) , which first converts the original WSC task into a token prediction task and then selects the candidate with higher probability as the final prediction. In general, the stronger the language models and the larger the fine-tuning datasets are, the better the model can perform on the WSC task.",
"cite_spans": [
{
"start": 151,
"end": 172,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 184,
"end": 202,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 251,
"end": 260,
"text": "Ng, 2012)",
"ref_id": "BIBREF37"
},
{
"start": 275,
"end": 299,
"text": "(Sakaguchi et al., 2020)",
"ref_id": "BIBREF39"
},
{
"start": 313,
"end": 330,
"text": "(Ye et al., 2020)",
"ref_id": "BIBREF46"
},
{
"start": 485,
"end": 507,
"text": "(Kocijan et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning Representation Models",
"sec_num": "3.2.3"
},
{
"text": "To clearly understand the progress we have made on solving hard PCR problems, we show the performance of all models on Winograd Schema challenge in Table 5 . From the results, we can make the following observations:",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 155,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Performances and Analysis",
"sec_num": "3.3"
},
{
"text": "1. Even though methods that leverage structured knowledge can provide explainable solutions to WSC questions, their performance is typically limited by their low coverage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performances and Analysis",
"sec_num": "3.3"
},
{
"text": "2. Different from them, language model based methods represent knowledge contained in human language with an implicit approach, and thus do not have the matching issue and achieve better overall performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performances and Analysis",
"sec_num": "3.3"
},
{
"text": "3. In general, fine-tuning pre-trained language representation models (e.g., BERT and RoBERTa) with similar datasets (e.g., DPR and Wino-Grande) achieve the current SOTA performances and two observations can be made: (1) The stronger the pre-trained model, the better the performance. This observation shows that current language representation models can indeed cover commonsense knowledge and along with the increase of their representation ability (e.g., deeper model or larger pre-training corpus like RoBERTa), more commonsense knowledge can be effectively represented. (2) The larger the fine-tuning dataset, the better the performance. This is probably because the knowledge about some WSC questions is only covered by Wino-Grande but not in DPR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performances and Analysis",
"sec_num": "3.3"
},
{
"text": "To investigate the reason behind WinoGrande's success, we divide WinoGrande into subsets based on the instances' relevance towards WSC. Assuming that the instance set of WinoGrande and WSC are I W G and I W SC respectively, for each instance i \u2208 I W G , we design its relevance score as follows: where O(i, i ) is the unigram co-occurrence of i and i and L() is the instance length. We use the released code and dataset to conduct the experiments and follow all hyper-parameters as the original paper (Sakaguchi et al., 2020) except the batch size 10 . From the results in Table 6 , we can observe that: (1) The most relevant instances contribute the most to the success. In some learning rate settings, it performs similar to or even better than the overall set; (2) Less relevant instances also help, which shows that current fine-tuning approach is not just fitting the data but also learning some underneath knowledge about solving the task from the data;",
"cite_spans": [
{
"start": 501,
"end": 525,
"text": "(Sakaguchi et al., 2020)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 573,
"end": 580,
"text": "Table 6",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Performances and Analysis",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R W SC (i) = M ax( O 2 (i, i ) L(i) \u2022 L(i ) , i \u2208 I W SC ),",
"eq_num": "(1)"
}
],
"section": "Performances and Analysis",
"sec_num": "3.3"
},
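Equation (1) above can be sketched in code. This is a hedged illustration under our own assumptions: we treat O(i, i') as a multiset unigram-overlap count and L() as the token length, since the counting details are not spelled out here, and `relevance_to_wsc` is a hypothetical helper name.

```python
from collections import Counter

# Sketch of Equation (1): R_WSC(i) = max over i' in I_WSC of
# O^2(i, i') / (L(i) * L(i')), where O counts shared unigram
# occurrences (multiset overlap -- our assumption) and L() is the
# instance length in tokens.

def relevance_to_wsc(instance, wsc_instances):
    toks = instance.lower().split()
    counts = Counter(toks)
    best = 0.0
    for other in wsc_instances:
        o_toks = other.lower().split()
        o_counts = Counter(o_toks)
        overlap = sum(min(counts[w], o_counts[w]) for w in counts)  # O(i, i')
        best = max(best, overlap ** 2 / (len(toks) * len(o_toks)))
    return best
```

Ranking WinoGrande instances by this score and slicing the ranked list yields the relevance-based subsets compared in Table 6.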
{
"text": "(3) The model can be sensitive to the hyper-parameters (i.e., learning rate). Different subsets have different best hyper-parameters and the learning process can easily fail with a bad hyper-parameter choice. To achieve a good performance on a fixed dataset like WSC, we can tune the hyper-parameters. But to create a reliable PCR system we can rely on in real life, we probably need a more robust model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performances and Analysis",
"sec_num": "3.3"
},
{
"text": "Besides the ordinary and hard PCR tasks, PCR is also an important research topic for many special purposes (e.g., gender bias) or in some special settings (e.g., Visual-aware PCR). In this section, we briefly introduce these tasks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other PCR Tasks",
"sec_num": "4"
},
{
"text": "1. PCR in the Medical Domain: I2b2 (Uzuner et al., 2012) is a dataset that focuses on identify-ing coreference relations in electronic medical records. As reported in (Zhang et al., 2019c) , the training set of I2b2 contains 2,024 third personal pronouns, 685 possessive pronouns, and 270 demonstrative pronouns. Its test set contains 1,244 third personal pronouns, 367 possessive pronouns, and 166 demonstrative pronouns. As a dataset in a relatively narrow domain, the usage of domain knowledge becomes important. As shown in (Zhang et al., 2019c) , i2b2 can be used as an additional dataset to evaluate models' cross-domain abilities.",
"cite_spans": [
{
"start": 35,
"end": 56,
"text": "(Uzuner et al., 2012)",
"ref_id": "BIBREF43"
},
{
"start": 167,
"end": 188,
"text": "(Zhang et al., 2019c)",
"ref_id": "BIBREF51"
},
{
"start": 528,
"end": 549,
"text": "(Zhang et al., 2019c)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other PCR Tasks",
"sec_num": "4"
},
{
"text": "2. PCR for Machine Translation: ParCor (Guillou et al., 2014) and ParCorFull (Lapshinova-Koltunski et al., 2018) are datasets focusing on PCR in parallel multi-lingual datasets, which can be used in downstream machine translation tasks. Different from other PCR works, it focuses on how to leverage the PCR results for better translation rather than how to solve the PCR problem.",
"cite_spans": [
{
"start": 39,
"end": 61,
"text": "(Guillou et al., 2014)",
"ref_id": "BIBREF12"
},
{
"start": 77,
"end": 112,
"text": "(Lapshinova-Koltunski et al., 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other PCR Tasks",
"sec_num": "4"
},
{
"text": "3. PCR for Chatbots: CIC (Chen and Choi, 2016) is a dataset focusing on identifying coreference relations in multi-party conversations. Compared with the ordinary PCR tasks, which are mostly annotated on formal textual data (e.g., newswire), identifying coreference relation in conversation is more challenging.",
"cite_spans": [
{
"start": 25,
"end": 46,
"text": "(Chen and Choi, 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other PCR Tasks",
"sec_num": "4"
},
{
"text": "4. PCR for Studying Gender Bias: Nowadays, gender bias has been a hot research topic in the NLP community (Rudinger et al., 2018; Zhao et al., 2018) . WinoGender (Rudinger et al., 2018) is among the most popular works. The setting of WinoGender is similar to the setting of WSC (Levesque et al., 2012) , where each sentence contains one target pronoun and two candidate noun phrases and the models are required to select the correct antecedent from the two candidates. But the purpose is different. WSC aims at evaluating models' abilities to understand commonsense knowledge, while Wino-Gender aims at evaluating how well models can predict without the influence of gender bias. The experiments show that some gender bias (e.g., 'he' is more likely to be predicted to be the doctor rather than the nurse by the machine) indeed exists in pre-trained language representation models. Such observation is astonishing and motivates the community to think about how to minimize the influence of such gender bias.",
"cite_spans": [
{
"start": 106,
"end": 129,
"text": "(Rudinger et al., 2018;",
"ref_id": "BIBREF38"
},
{
"start": 130,
"end": 148,
"text": "Zhao et al., 2018)",
"ref_id": "BIBREF52"
},
{
"start": 162,
"end": 185,
"text": "(Rudinger et al., 2018)",
"ref_id": "BIBREF38"
},
{
"start": 278,
"end": 301,
"text": "(Levesque et al., 2012)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other PCR Tasks",
"sec_num": "4"
},
{
"text": "5. Visual-aware PCR: Recently, a visual-aware PCR dataset , which evaluates how well models can ground pronouns to visual objects, was proposed. Similar to CIC (Chen and Choi, 2016) , visual-aware PCR also focuses on pronouns in daily dialogue, where the language usage is informal and a lot of background knowledge can be missing. For example, if one speaker refers to something both speakers can see, they may directly use a pronoun rather than introduce it first. In such a case, a pronoun may refer to not mentioned objects in the conversation. As analyzed in the original paper, 15% of pronouns in conversations refer to not mentioned objects and for them, leveraging the visual context information becomes crucial. As shown in (Kottur et al., 2018) , grounding pronouns to the visual objects can significantly help the model to better understand the dialog and generate the better response, which further proves that visual-aware PCR is an important research topic to explore.",
"cite_spans": [
{
"start": 160,
"end": 181,
"text": "(Chen and Choi, 2016)",
"ref_id": "BIBREF2"
},
{
"start": 733,
"end": 754,
"text": "(Kottur et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other PCR Tasks",
"sec_num": "4"
},
{
"text": "In this paper, we survey the progress on the pronoun coreference resolution (PCR) task, and analyze the improvement and limitations of existing approaches. Experiments and analysis on both the ordinary and hard PCR tasks demonstrate that even though we have made great progress according to the main evaluation metric, the PCR task is still far away from being solved. For example, all bestperforming ordinary PCR models struggle on the cross-domain setting as well as infrequent objects. Also, even though fine-tuning pre-trained language representation models can achieve near-human performance on WSC, it can be sensitive to the hyperparameters. All codes will be released to encourage the research on the PCR task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "One exception is the entities that are related to organizations. For example, \"they\" can refer to \"the company\". Another exception is to prevent generic masculine, where \"they\" can refer to singular entity in genderneutral language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Some datasets (e.g., CoNLL-2012 shared task) are originally designed for the general coreference resolution task. Nonetheless, we can easily convert them into a PCR task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": ". Structured Knowledge: Another limitation of the end-to-end model is that its success heavily relies on the quality and coverage of the training data. However, in real applications, it is labor-intensive and almost impossible to annotate a large-scale dataset to contain all scenar-5 These models once achieved better performance either on the general coreference resolution task or the PCR task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The latest version of WSC has 284 questions, but as all the following works are evaluated based on the 273-question version, we still use the 273-question version in this survey.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This dataset is also referred to as WSCR in some works.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The original batch size is 16 and our batch size is 4 due to the GPU memory limitation, so the experimental result is slightly different from the one reported in the original paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This paper was supported by Early Career Scheme (ECS, No. 26206717), General Research Fund (GRF, No. 16211520), and Research Impact Fund (RIF,) from the Research Grants Council (RGC) of Hong Kong.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Understanding the value of features for coreference resolution",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Bengtson",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP 2008",
"volume": "",
"issue": "",
"pages": "294--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Bengtson and Dan Roth. 2008. Understanding the value of features for coreference resolution. In Pro- ceedings of EMNLP 2008, pages 294-303.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A constrained latent variable model for coreference resolution",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Rajhans",
"middle": [],
"last": "Samdani",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP 2013",
"volume": "",
"issue": "",
"pages": "601--612",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai-Wei Chang, Rajhans Samdani, and Dan Roth. 2013. A constrained latent variable model for coref- erence resolution. In Proceedings of EMNLP 2013, pages 601-612.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Character identification on multiparty conversation: Identifying mentions of characters in TV shows",
"authors": [
{
"first": "Yu-Hsin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jinho",
"middle": [
"D"
],
"last": "Choi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of SIGDIAL 2016",
"volume": "",
"issue": "",
"pages": "90--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu-Hsin Chen and Jinho D. Choi. 2016. Character identification on multiparty conversation: Identify- ing mentions of characters in TV shows. In Proceed- ings of SIGDIAL 2016, pages 90-100.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Overview of muc-7/met-2",
"authors": [
{
"first": "A",
"middle": [],
"last": "Nancy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chinchor",
"suffix": ""
}
],
"year": 1998,
"venue": "the Seventh Message Understanding Conference(MUC7)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nancy A Chinchor. 1998. Overview of muc-7/met- 2. In the Seventh Message Understanding Confer- ence(MUC7).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Entity-centric coreference resolution with model stacking",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL 2015",
"volume": "",
"issue": "",
"pages": "1405--1415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark and Christopher D. Manning. 2015. Entity-centric coreference resolution with model stacking. In Proceedings of ACL 2015, pages 1405- 1415.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Deep reinforcement learning for mention-ranking coreference models",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP 2016",
"volume": "",
"issue": "",
"pages": "2256--2262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking coref- erence models. In Proceedings of EMNLP 2016, pages 2256-2262.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT 2019",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of NAACL-HLT 2019, pages 4171-4186.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The automatic content extraction (ACE) program -tasks, data, and evaluation",
"authors": [
{
"first": "George",
"middle": [
"R"
],
"last": "Doddington",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"A"
],
"last": "Przybocki",
"suffix": ""
},
{
"first": "Lance",
"middle": [
"A"
],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [
"M"
],
"last": "Strassel",
"suffix": ""
},
{
"first": "Ralph",
"middle": [
"M"
],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George R. Doddington, Alexis Mitchell, Mark A. Przy- bocki, Lance A. Ramshaw, Stephanie M. Strassel, and Ralph M. Weischedel. 2004. The automatic con- tent extraction (ACE) program -tasks, data, and eval- uation. In Proceedings of LREC 2004.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A knowledge hunting framework for common sense reasoning",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Emami",
"suffix": ""
},
{
"first": "Noelia",
"middle": [
"De"
],
"last": "",
"suffix": ""
},
{
"first": "La",
"middle": [],
"last": "Cruz",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Kaheer",
"middle": [],
"last": "Suleman",
"suffix": ""
},
{
"first": "Jackie Chi Kit",
"middle": [],
"last": "Cheung",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of EMNLP 2018",
"volume": "",
"issue": "",
"pages": "1949--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Emami, Noelia De La Cruz, Adam Trischler, Ka- heer Suleman, and Jackie Chi Kit Cheung. 2018. A knowledge hunting framework for common sense reasoning. In Proceedings of EMNLP 2018, pages 1949-1958.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The knowref coreference corpus: Removing gender and number cues for difficult pronominal anaphora resolution",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Emami",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Trichelair",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Kaheer",
"middle": [],
"last": "Suleman",
"suffix": ""
},
{
"first": "Hannes",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "Jackie Chi Kit",
"middle": [],
"last": "Cheung",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ACL 2019",
"volume": "",
"issue": "",
"pages": "3952--3961",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Emami, Paul Trichelair, Adam Trischler, Ka- heer Suleman, Hannes Schulz, and Jackie Chi Kit Cheung. 2019. The knowref coreference corpus: Removing gender and number cues for difficult pronominal anaphora resolution. In Proceedings of ACL 2019, pages 3952-3961.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Wikicoref: An english coreference-annotated corpus of wikipedia articles",
"authors": [
{
"first": "Abbas",
"middle": [],
"last": "Ghaddar",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abbas Ghaddar and Philippe Langlais. 2016. Wiki- coref: An english coreference-annotated corpus of wikipedia articles. In Proceedings of LREC 2016.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Message understanding conference-6: A brief history",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "Beth",
"middle": [],
"last": "Sundheim",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of COLING 1996",
"volume": "",
"issue": "",
"pages": "466--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Grishman and Beth Sundheim. 1996. Message understanding conference-6: A brief history. In Pro- ceedings of COLING 1996, pages 466-471.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Parcor 1.0: A parallel pronoun-coreference corpus to support statistical MT",
"authors": [
{
"first": "Liane",
"middle": [],
"last": "Guillou",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hardmeier",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"L"
],
"last": "Webber",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of LREC 2014",
"volume": "",
"issue": "",
"pages": "3191--3198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liane Guillou, Christian Hardmeier, Aaron Smith, J\u00f6rg Tiedemann, and Bonnie L. Webber. 2014. Parcor 1.0: A parallel pronoun-coreference corpus to sup- port statistical MT. In Proceedings of LREC 2014, pages 3191-3198.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Forms of anaphoric reference to organisational named entities: Hoping to widen appeal, they diversified",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Hardmeier",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Bevacqua",
"suffix": ""
},
{
"first": "Sharid",
"middle": [],
"last": "Lo\u00e1iciga",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rohde",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Named Entities Workshop",
"volume": "",
"issue": "",
"pages": "36--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Hardmeier, Luca Bevacqua, Sharid Lo\u00e1iciga, and Hannah Rohde. 2018. Forms of anaphoric ref- erence to organisational named entities: Hoping to widen appeal, they diversified. In Proceedings of the Seventh Named Entities Workshop, pages 36-40.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Resolving pronoun references",
"authors": [
{
"first": "R",
"middle": [],
"last": "Jerry",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hobbs",
"suffix": ""
}
],
"year": 1978,
"venue": "Lingua",
"volume": "44",
"issue": "4",
"pages": "311--338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry R Hobbs. 1978. Resolving pronoun references. Lingua, 44(4):311-338.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Spanbert: Improving pre-training by representing and predicting spans",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2020,
"venue": "Trans. Assoc. Comput. Linguistics",
"volume": "8",
"issue": "",
"pages": "64--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Trans. Assoc. Comput. Lin- guistics, 8:64-77.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Coreference resolution with entity equalization",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Kantor",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ACL 2019",
"volume": "",
"issue": "",
"pages": "673--677",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Kantor and Amir Globerson. 2019. Coreference resolution with entity equalization. In Proceedings of ACL 2019, pages 673-677.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A surprisingly robust trick for the winograd schema challenge",
"authors": [
{
"first": "Ana-Maria",
"middle": [],
"last": "Vid Kocijan",
"suffix": ""
},
{
"first": "Oana-Maria",
"middle": [],
"last": "Cretu",
"suffix": ""
},
{
"first": "Yordan",
"middle": [],
"last": "Camburu",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Yordanov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lukasiewicz",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ACL 2019",
"volume": "",
"issue": "",
"pages": "4837--4842",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, and Thomas Lukasiewicz. 2019. A surprisingly robust trick for the winograd schema challenge. In Proceedings of ACL 2019, pages 4837-4842.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Survey: Anaphora with non-nominal antecedents in computational linguistics: a Survey",
"authors": [
{
"first": "Varada",
"middle": [],
"last": "Kolhatkar",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roussel",
"suffix": ""
},
{
"first": "Stefanie",
"middle": [],
"last": "Dipper",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Zinsmeister",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics",
"volume": "44",
"issue": "3",
"pages": "547--612",
"other_ids": {
"DOI": [
"10.1162/coli_a_00327"
]
},
"num": null,
"urls": [],
"raw_text": "Varada Kolhatkar, Adam Roussel, Stefanie Dipper, and Heike Zinsmeister. 2018. Survey: Anaphora with non-nominal antecedents in computational lin- guistics: a Survey. Computational Linguistics, 44(3):547-612.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Visual coreference resolution in visual dialog using neural module networks",
"authors": [
{
"first": "Satwik",
"middle": [],
"last": "Kottur",
"suffix": ""
},
{
"first": "M",
"middle": [
"F"
],
"last": "Jos\u00e9",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Moura",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rohrbach",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ECCV 2018",
"volume": "",
"issue": "",
"pages": "160--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satwik Kottur, Jos\u00e9 M. F. Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. 2018. Visual corefer- ence resolution in visual dialog using neural mod- ule networks. In Proceedings of ECCV 2018, pages 160-178.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "ParCorFull: a parallel corpus annotated with full coreference",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Lapshinova-Koltunski",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hardmeier",
"suffix": ""
},
{
"first": "Pauline",
"middle": [],
"last": "Krielke",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ekaterina Lapshinova-Koltunski, Christian Hardmeier, and Pauline Krielke. 2018. ParCorFull: a parallel corpus annotated with full coreference. In Proceed- ings of LREC 2018.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Deterministic coreference resolution based on entity-centric, precision-ranked rules",
"authors": [
{
"first": "Heeyoung",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Angel",
"middle": [
"X"
],
"last": "Chang",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Peirsman",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "4",
"pages": "885--916",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heeyoung Lee, Angel X. Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolu- tion based on entity-centric, precision-ranked rules. Computational Linguistics, 39(4):885-916.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "End-to-end neural coreference resolution",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of EMNLP 2017",
"volume": "",
"issue": "",
"pages": "188--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference resolu- tion. In Proceedings of EMNLP 2017, pages 188- 197.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Higher-order coreference resolution with coarse-tofine inference",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL-HLT 2018",
"volume": "",
"issue": "",
"pages": "687--692",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to- fine inference. In Proceedings of NAACL-HLT 2018, pages 687-692.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The winograd schema challenge",
"authors": [
{
"first": "Hector",
"middle": [],
"last": "Levesque",
"suffix": ""
},
{
"first": "Ernest",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Leora",
"middle": [],
"last": "Morgenstern",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceeedings of KRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hector Levesque, Ernest Davis, and Leora Morgen- stern. 2012. The winograd schema challenge. In Proceeedings of KRR 2012.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Conceptnet-a practical commonsense reasoning tool-kit",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Push",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2004,
"venue": "BT technology journal",
"volume": "22",
"issue": "4",
"pages": "211--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Liu and Push Singh. 2004. Conceptnet-a practi- cal commonsense reasoning tool-kit. BT technology journal, 22(4):211-226.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Commonsense knowledge enhanced embeddings for solving pronoun disambiguation problems in winograd schema challenge",
"authors": [
{
"first": "Quan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.04146"
]
},
"num": null,
"urls": [],
"raw_text": "Quan Liu, Hui Jiang, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, and Yu Hu. 2016. Commonsense knowledge enhanced embeddings for solving pronoun disam- biguation problems in winograd schema challenge. arXiv preprint arXiv:1611.04146.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Anaphora resolution in machine translation",
"authors": [
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 1995,
"venue": "TMMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruslan Mitkov et al. 1995. Anaphora resolution in ma- chine translation. In TMMT.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Supervised ranking for pronoun resolution: Some recent improvements",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of AAAI 2005",
"volume": "",
"issue": "",
"pages": "1081--1086",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Ng. 2005. Supervised ranking for pronoun res- olution: Some recent improvements. In Proceedings of AAAI 2005, pages 1081-1086.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL-HLT 2018",
"volume": "",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of NAACL-HLT 2018, pages 2227-2237.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A crowdsourced corpus of multiple judgments and disagreement on anaphoric interpretation",
"authors": [
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Chamberlain",
"suffix": ""
},
{
"first": "Silviu",
"middle": [],
"last": "Paun",
"suffix": ""
},
{
"first": "Juntao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Uma",
"suffix": ""
},
{
"first": "Udo",
"middle": [],
"last": "Kruschwitz",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT 2019",
"volume": "",
"issue": "",
"pages": "1778--1789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimo Poesio, Jon Chamberlain, Silviu Paun, Juntao Yu, Alexandra Uma, and Udo Kruschwitz. 2019. A crowdsourced corpus of multiple judgments and dis- agreement on anaphoric interpretation. In Proceed- ings of NAACL-HLT 2019, pages 1778-1789.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution",
"authors": [
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of NAACL-HLT 2006",
"volume": "",
"issue": "",
"pages": "192--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Paolo Ponzetto and Michael Strube. 2006. Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution. In Proceed- ings of NAACL-HLT 2006, pages 192-199.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Conll-2012 shared task: Modeling multilingual unrestricted coreference in ontonotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of CoNLL 2012",
"volume": "",
"issue": "",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll- 2012 shared task: Modeling multilingual unre- stricted coreference in ontonotes. In Proceedings of CoNLL 2012, pages 1-40.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Conll-2011 shared task: Modeling unrestricted coreference in ontonotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Lance",
"middle": [
"A"
],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Ralph",
"middle": [
"M"
],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "1--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Lance A. Ramshaw, Mitchell P. Mar- cus, Martha Palmer, Ralph M. Weischedel, and Nian- wen Xue. 2011. Conll-2011 shared task: Modeling unrestricted coreference in ontonotes. In Proceed- ings of CoNLL 2011, pages 1-27.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A multipass sieve for coreference resolution",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Raghunathan",
"suffix": ""
},
{
"first": "Heeyoung",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sudarshan",
"middle": [],
"last": "Rangarajan",
"suffix": ""
},
{
"first": "Nate",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthik Raghunathan, Heeyoung Lee, Sudarshan Ran- garajan, Nate Chambers, Mihai Surdeanu, Dan Ju- rafsky, and Christopher D. Manning. 2010. A multi- pass sieve for coreference resolution. In Proceed- ings of EMNLP 2010.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Resolving complex cases of definite pronouns: The winograd schema challenge",
"authors": [
{
"first": "Altaf",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EMNLP-CoNLL 2012",
"volume": "",
"issue": "",
"pages": "777--789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Altaf Rahman and Vincent Ng. 2012. Resolving complex cases of definite pronouns: The wino- grad schema challenge. In Proceedings of EMNLP- CoNLL 2012, pages 777-789.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Gender bias in coreference resolution",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Leonard",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT",
"volume": "2",
"issue": "",
"pages": "8--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Pa- pers), pages 8-14.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "WINOGRANDE: an adversarial winograd schema challenge at scale",
"authors": [
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Le Bras",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of AAAI 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhaga- vatula, and Yejin Choi. 2020. WINOGRANDE: an adversarial winograd schema challenge at scale. In Proceedings of AAAI 2020.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Two uses of anaphora resolution in summarization",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Mijail",
"middle": [
"A"
],
"last": "Kabadjov",
"suffix": ""
},
{
"first": "Karel",
"middle": [],
"last": "Jevzek",
"suffix": ""
}
],
"year": 2007,
"venue": "Information Processing & Management",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josef Steinberger, Massimo Poesio, Mijail A Kabadjov, and Karel Jevzek. 2007. Two uses of anaphora reso- lution in summarization. Information Processing & Management.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "A machine learning approach to pronoun resolution in spoken dialogue",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL 2003",
"volume": "",
"issue": "",
"pages": "168--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Strube and Christoph M\u00fcller. 2003. A ma- chine learning approach to pronoun resolution in spoken dialogue. In Proceedings of ACL 2003, pages 168-175.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A simple method for commonsense reasoning",
"authors": [
{
"first": "Trieu",
"middle": [
"H"
],
"last": "Trinh",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trieu H. Trinh and Quoc V. Le. 2018. A sim- ple method for commonsense reasoning. CoRR, abs/1806.02847.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Evaluating the state of the art in coreference resolution for electronic medical records",
"authors": [
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "Andreea",
"middle": [],
"last": "Bodnari",
"suffix": ""
},
{
"first": "Shuying",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Tyler",
"middle": [],
"last": "Forbush",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Pestian",
"suffix": ""
},
{
"first": "Brett R",
"middle": [],
"last": "South",
"suffix": ""
}
],
"year": 2012,
"venue": "J. Am. Medical Informatics Assoc",
"volume": "19",
"issue": "5",
"pages": "786--791",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ozlem Uzuner, Andreea Bodnari, Shuying Shen, Tyler Forbush, John Pestian, and Brett R South. 2012. Evaluating the state of the art in coreference resolu- tion for electronic medical records. J. Am. Medical Informatics Assoc., 19(5):786-791.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Using Lexical and Encyclopedic Knowledge",
"authors": [
{
"first": "Yannick",
"middle": [],
"last": "Versley",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "393--429",
"other_ids": {
"DOI": [
"10.1007/978-3-662-47909-4_14"
]
},
"num": null,
"urls": [],
"raw_text": "Yannick Versley, Massimo Poesio, and Simone Ponzetto. 2016. Using Lexical and Encyclopedic Knowledge, pages 393-429. Springer Berlin Heidel- berg, Berlin, Heidelberg.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Learning global features for coreference resolution",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Stuart",
"middle": [
"M"
],
"last": "Shieber",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL-HLT 2016",
"volume": "",
"issue": "",
"pages": "994--1004",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning global features for coref- erence resolution. In Proceedings of NAACL-HLT 2016, pages 994-1004.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Coreferential reasoning learning for language representation",
"authors": [
{
"first": "Deming",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jiaju",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Zhenghao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of EMNLP 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deming Ye, Yankai Lin, Jiaju Du, Zhenghao Liu, Peng Li, Maosong Sun, and Zhiyuan Liu. 2020. Coref- erential reasoning learning for language representa- tion. In Proceedings of EMNLP 2020.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "What you see is what you get: Visual pronoun coreference resolution in dialogues",
"authors": [
{
"first": "Xintong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Hongming",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Changshui",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of EMNLP-IJCNLP 2019",
"volume": "",
"issue": "",
"pages": "5122--5131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xintong Yu, Hongming Zhang, Yangqiu Song, Yan Song, and Changshui Zhang. 2019. What you see is what you get: Visual pronoun coreference resolution in dialogues. In Proceedings of EMNLP-IJCNLP 2019, pages 5122-5131.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "SP-10K: A large-scale evaluation set for selectional preference acquisition",
"authors": [
{
"first": "Hongming",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hantian",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ACL 2019",
"volume": "",
"issue": "",
"pages": "722--731",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongming Zhang, Hantian Ding, and Yangqiu Song. 2019a. SP-10K: A large-scale evaluation set for se- lectional preference acquisition. In Proceedings of ACL 2019, pages 722-731.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "ASER: A largescale eventuality knowledge graph",
"authors": [
{
"first": "Hongming",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Haojie",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Cane",
"middle": [
"Wing-Ki"
],
"last": "Leung",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of WWW 2020",
"volume": "",
"issue": "",
"pages": "201--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongming Zhang, Xin Liu, Haojie Pan, Yangqiu Song, and Cane Wing-Ki Leung. 2020. ASER: A large- scale eventuality knowledge graph. In Proceedings of WWW 2020, pages 201-211.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Incorporating context and external knowledge for pronoun coreference resolution",
"authors": [
{
"first": "Hongming",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT 2019",
"volume": "",
"issue": "",
"pages": "872--881",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongming Zhang, Yan Song, and Yangqiu Song. 2019b. Incorporating context and external knowl- edge for pronoun coreference resolution. In Pro- ceedings of NAACL-HLT 2019, pages 872-881.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Knowledge-aware pronoun coreference resolution",
"authors": [
{
"first": "Hongming",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ACL 2019",
"volume": "",
"issue": "",
"pages": "867--876",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongming Zhang, Yan Song, Yangqiu Song, and Dong Yu. 2019c. Knowledge-aware pronoun coreference resolution. In Proceedings of ACL 2019, pages 867- 876.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Gender bias in coreference resolution: Evaluation and debiasing methods",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL-HLT 2018",
"volume": "",
"issue": "",
"pages": "15--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debias- ing methods. In Proceedings of NAACL-HLT 2018, pages 15-20.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "WSC question examples."
},
"TABREF1": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "The performance of the End-to-end model on the CoNLL-2012 shared task coreference resolution dataset. The model's performances of different coreference types are reported separately."
},
"TABREF4": {
"content": "<table><tr><td>Model</td><td>Training data</td><td colspan=\"2\">Test data CoNLL i2b2</td></tr><tr><td>End-to-end</td><td>CoNLL i2b2</td><td>72.1 20.0</td><td>75.2 92.3</td></tr><tr><td>+ KG</td><td>CoNLL i2b2</td><td>75.9 42.7</td><td>80.9 95.2</td></tr><tr><td>+ SpanBERT</td><td>CoNLL i2b2</td><td>79.6 28.5</td><td>40.8 80.5</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "Performances of different models on the CoNLL-2012 shared task. Precision (P), recall (R), and the F1 score are reported. Numbers of different types of pronouns in the test set are shown in the brackets. Best models are indicated with the bold font."
},
"TABREF5": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Models' performance (in F1 score) in crossdomain setting on different training/test data."
},
"TABREF7": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Influence of the frequency."
},
"TABREF9": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Performances of different models on the 273-question version WSC. NA means that the model cannot give a prediction, A p means the accuracy of predict examples without NA examples. And A o the overall accuracy of all examples (i.e., Correct, Wrong, and NA examples)"
},
"TABREF11": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Performance of fine-tuning RoBERTa with different learning rates and three subsets of WinoGrande split by their instances' relevance towards the original WSC. L.R. means learning rate and Rel. means relevance to WSC data. Numbers of instances are shown in brackets. Best performed datasets for each learning rate is indicated with the bold font."
}
}
}
}