| { |
| "paper_id": "Q18-1042", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:10:43.157016Z" |
| }, |
| "title": "Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns", |
| "authors": [ |
| { |
| "first": "Kellie", |
| "middle": [], |
| "last": "Webster", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "websterk@google.com" |
| }, |
| { |
| "first": "Marta", |
| "middle": [], |
| "last": "Recasens", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "recasens@google.com" |
| }, |
| { |
| "first": "Vera", |
| "middle": [], |
| "last": "Axelrod", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "vaxelrod@google.com" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Baldridge", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "jasonbaldridge@google.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Coreference resolution is an important task for natural language understanding, and the resolution of ambiguous pronouns a longstanding challenge. Nonetheless, existing corpora do not capture ambiguous pronouns in sufficient volume or diversity to accurately indicate the practical utility of models. Furthermore, we find gender bias in existing corpora and systems favoring masculine entities. To address this, we present and release GAP, a gender-balanced labeled corpus of 8,908 ambiguous pronoun-name pairs sampled to provide diverse coverage of challenges posed by real-world text. We explore a range of baselines that demonstrate the complexity of the challenge, the best achieving just 66.9% F1. We show that syntactic structure and continuous neural models provide promising, complementary cues for approaching the challenge.",
| "pdf_parse": { |
| "paper_id": "Q18-1042", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Coreference resolution is an important task for natural language understanding, and the resolution of ambiguous pronouns a longstanding challenge. Nonetheless, existing corpora do not capture ambiguous pronouns in sufficient volume or diversity to accurately indicate the practical utility of models. Furthermore, we find gender bias in existing corpora and systems favoring masculine entities. To address this, we present and release GAP, a gender-balanced labeled corpus of 8,908 ambiguous pronoun-name pairs sampled to provide diverse coverage of challenges posed by real-world text. We explore a range of baselines that demonstrate the complexity of the challenge, the best achieving just 66.9% F1. We show that syntactic structure and continuous neural models provide promising, complementary cues for approaching the challenge.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Coreference resolution involves linking referring expressions that evoke the same discourse entity, as defined in shared tasks such as CoNLL 2011/2012 (Pradhan et al., 2012) and MUC (Grishman and Sundheim, 1996) . Unfortunately, high scores on these tasks do not necessarily translate into acceptable performance for downstream applications such as machine translation (Guillou, 2012) and fact extraction (Nakayama, 2008) . In particular, high-scoring systems successfully identify coreference relationships between string-matching proper names, but fare worse on anaphoric mentions such as pronouns and common noun phrases (Stoyanov et al., 2009; Rahman and Ng, 2012; Durrett and Klein, 2013) .", |
| "cite_spans": [ |
| { |
| "start": 151, |
| "end": 173, |
| "text": "(Pradhan et al., 2012)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 182, |
| "end": 211, |
| "text": "(Grishman and Sundheim, 1996)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 369, |
| "end": 384, |
| "text": "(Guillou, 2012)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 405, |
| "end": 421, |
| "text": "(Nakayama, 2008)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 624, |
| "end": 647, |
| "text": "(Stoyanov et al., 2009;", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 648, |
| "end": 668, |
| "text": "Rahman and Ng, 2012;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 669, |
| "end": 693, |
| "text": "Durrett and Klein, 2013)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We consider the problem of resolving gendered ambiguous pronouns in English, such as she 1 in:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In May, Fujisawa joined Mari Motohashi's rink as the team's skip, moving back from Karuizawa to Kitami where she had spent her junior days.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "With this scope, we make three key contributions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We design an extensible, language-independent mechanism for extracting challenging ambiguous pronouns from text. \u2022 We build and release GAP, a human-labeled corpus of 8,908 ambiguous pronoun-name pairs derived from Wikipedia. 2 This data set targets the challenges of resolving naturally occurring ambiguous pronouns and rewards systems that are gender-fair. \u2022 We run four state-of-the-art coreference resolvers and several competitive simple baselines on GAP to understand limitations in current modeling, including gender bias. We find that syntactic structure and transformer models (Vaswani et al., 2017) provide promising, complementary cues for approaching GAP.", |
| "cite_spans": [ |
| { |
| "start": 217, |
| "end": 229, |
| "text": "Wikipedia. 2", |
| "ref_id": null |
| }, |
| { |
| "start": 588, |
| "end": 610, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF46" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Coreference resolution decisions can drastically alter how automatic systems process text. Biases in automatic systems have caused a wide range of underrepresented groups to be served in an inequitable way by downstream applications (Hardt, 2014) . We take the construction of the new GAP corpus as an opportunity to reduce gender bias in coreference data sets; in this way, GAP can promote equitable modeling of reference phenomena complementary to the recent work of Zhao et al. (2018) and Rudinger et al. (2018) .", |
| "cite_spans": [ |
| { |
| "start": 233, |
| "end": 246, |
| "text": "(Hardt, 2014)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 469, |
| "end": 487, |
| "text": "Zhao et al. (2018)", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 492, |
| "end": 514, |
| "text": "Rudinger et al. (2018)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Such approaches promise to improve equity of downstream models, such as triple extraction for knowledge-base population.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Existing datasets do not capture ambiguous pronouns in sufficient volume or diversity to benchmark systems for practical applications.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Winograd schemas (Levesque et al., 2012) are closely related to our work as they contain ambiguous pronouns. These are pairs of short texts with an ambiguous pronoun and a special word (in square brackets) that switches its referent:", |
| "cite_spans": [ |
| { |
| "start": 17, |
| "end": 40, |
| "text": "(Levesque et al., 2012)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Sets with Ambiguous Pronouns", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "(2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Sets with Ambiguous Pronouns", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The trophy would not fit in the brown suitcase because it was too [big/small].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Sets with Ambiguous Pronouns", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The Definite Pronoun Resolution Data Set (Rahman and Ng, 2012) comprises 943 Winograd schemas written by undergraduate students and later extended by Peng et al. (2015) . The First Winograd Schema Challenge (Morgenstern et al., 2016) released 60 examples adapted from published literary works (Pronoun Disambiguation Problem) 3 and 285 manually constructed schemas (Winograd Schema Challenge). 4 More recently, Rudinger et al. (2018) and Zhao et al. (2018) have created two Winograd schema-style datasets containing 720 and 3,160 sentences, respectively, where each sentence contains a gendered pronoun and two occupation (or participant) antecedent candidates that break occupational gender stereotypes. Overall, ambiguous pronoun datasets have been limited in size and, most notably, consist only of manually constructed examples that do not necessarily reflect the challenges faced by systems in the wild. In contrast, the largest and most widely used coreference corpus, OntoNotes (Pradhan et al., 2007) , is general purpose. In OntoNotes, simpler high-frequency coreference examples (e.g., those captured by string matching) greatly outnumber examples of ambiguous pronouns, which obscures performance results on that key class (Stoyanov et al., 2009; Rahman and Ng, 2012) . Ambiguous pronouns greatly impact main entity resolution",
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 62, |
| "text": "(Rahman and Ng, 2012)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 150, |
| "end": 168, |
| "text": "Peng et al. (2015)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 207, |
| "end": 233, |
| "text": "(Morgenstern et al., 2016)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 411, |
| "end": 433, |
| "text": "Rudinger et al. (2018)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 438, |
| "end": 456, |
| "text": "Zhao et al. (2018)", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 985, |
| "end": 1007, |
| "text": "(Pradhan et al., 2007)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 1233, |
| "end": 1256, |
| "text": "(Stoyanov et al., 2009;", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 1257, |
| "end": 1277, |
| "text": "Rahman and Ng, 2012)", |
| "ref_id": "BIBREF38" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Sets with Ambiguous Pronouns", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "in Wikipedia, the focus of Ghaddar and Langlais (2016a) , who use WikiCoref, a corpus of 30 full articles annotated with coreferences (Ghaddar and Langlais, 2016b) .", |
| "cite_spans": [ |
| { |
| "start": 27, |
| "end": 55, |
| "text": "Ghaddar and Langlais (2016a)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 134, |
| "end": 163, |
| "text": "(Ghaddar and Langlais, 2016b)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Sets with Ambiguous Pronouns", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "GAP examples are not strictly Winograd schemas because they have no reference-flipping word. Nonetheless, they contain two person named entities of the same gender and an ambiguous pronoun that may refer to either (or neither). As such, they represent a similarly difficult challenge and require the same inferential capabilities. More importantly, GAP is larger than existing Winograd schema datasets, and the examples are from naturally occurring Wikipedia text. GAP complements OntoNotes by providing an extensive targeted dataset of naturally occurring ambiguous pronouns.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Sets with Ambiguous Pronouns", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "State-of-the-art coreference systems struggle to resolve ambiguous pronouns that require world knowledge and commonsense reasoning (Durrett and Klein, 2013) . Past efforts have tried to mine semantic preferences and inferential knowledge via predicate-argument statistics mined from corpora (Dagan and Itai, 1990; Yang et al., 2005) , semantic roles (Kehler et al., 2004; Ponzetto and Strube, 2006) , contextual compatibility features (Liao and Grishman, 2010; Bansal and Klein, 2012) , and event role sequences (Bean and Riloff, 2004; Chambers and Jurafsky, 2008) . These usually bring small improvements in general coreference datasets and larger improvements in targeted Winograd datasets. Rahman and Ng (2012) scored 73.05% precision on their Winograd dataset after incorporating targeted features such as narrative chains, Web-based counts, and selectional preferences. Peng et al. (2015)'s system improved the state of the art to 76.41% by acquiring (subject, verb, object) and (subject/object, verb, verb) knowledge triples. In the First Winograd Schema Challenge (Morgenstern et al., 2016) , participants used methods ranging from logical axioms and inference to neural network architectures enhanced with commonsense knowledge (Liu et al., 2017 ), but no system qualified for the second round. Recently, Trinh and Le (2018) have achieved the best results on the Pronoun Disambiguation Problem and Winograd Schema Challenge datasets, scoring 70% and 63.7%, respectively, which are 3 percentage points and 11 percentage points above Liu et al.'s (2017) previous state of the art. Their model is an ensemble of word-level and character-level recurrent language models, which, despite not being trained on coreference data, encode commonsense as part of the more general language modeling task. It is unclear how these systems perform on naturally occurring ambiguous pronouns. For example, Trinh and Le's (2018) system relies on choosing a candidate from a pre-specified list, and it would need to be extended to handle the case that the pronoun does not corefer with any given candidate. By releasing GAP, we aim to foster research in this direction, and set several competitive baselines without using targeted resources.",
| "cite_spans": [ |
| { |
| "start": 131, |
| "end": 156, |
| "text": "(Durrett and Klein, 2013)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 291, |
| "end": 313, |
| "text": "(Dagan and Itai, 1990;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 314, |
| "end": 332, |
| "text": "Yang et al., 2005)", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 350, |
| "end": 371, |
| "text": "(Kehler et al., 2004;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 372, |
| "end": 398, |
| "text": "Ponzetto and Strube, 2006)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 435, |
| "end": 460, |
| "text": "(Liao and Grishman, 2010;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 461, |
| "end": 484, |
| "text": "Bansal and Klein, 2012)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 512, |
| "end": 535, |
| "text": "(Bean and Riloff, 2004;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 536, |
| "end": 564, |
| "text": "Chambers and Jurafsky, 2008)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 693, |
| "end": 713, |
| "text": "Rahman and Ng (2012)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 874, |
| "end": 892, |
| "text": "Peng et al. (2015)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 1067, |
| "end": 1093, |
| "text": "(Morgenstern et al., 2016)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 1232, |
| "end": 1249, |
| "text": "(Liu et al., 2017", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 1538, |
| "end": 1557, |
| "text": "Liu et al.'s (2017)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Ambiguous Pronouns", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Although existing corpora have promoted research into coreference resolution, they suffer from gender bias. Specifically, of the over 2,000 gendered pronouns in the OntoNotes test corpus, less than 25% are feminine (Zhao et al., 2018) . The imbalance is more pronounced on the development and training sets, with less than 20% feminine pronouns each. WikiCoref contains only 12% feminine pronouns. In the Definite Pronoun Resolution Dataset training data, 27% of the gendered pronouns are feminine, and the Winograd Schema Challenge datasets contain 28% and 33% feminine examples. Two exceptions are the recent WinoBias (Zhao et al., 2018) and Winogender schemas (Rudinger et al., 2018) datasets, which reveal how occupation-specific gender bias pervades the majority of publicly available coreference resolution systems by including a balanced number of feminine pronouns that corefer with anti-stereotypical occupations (see Example (3), from WinoBias). These datasets focus on pronominal coreference where the antecedent is a nominal mention, whereas GAP focuses on relations where the antecedent is a named entity.",
| "cite_spans": [ |
| { |
| "start": 215, |
| "end": 234, |
| "text": "(Zhao et al., 2018)", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 620, |
| "end": 639, |
| "text": "(Zhao et al., 2018)", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 663, |
| "end": 686, |
| "text": "(Rudinger et al., 2018)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bias in Machine Learning", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "(3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bias in Machine Learning", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The salesperson sold some books to the librarian because she was trying to sell them.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bias in Machine Learning", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The pervasive bias in existing datasets is concerning given that learned NLP systems often reflect and even amplify training biases (Bolukbasi et al., 2016; Caliskan et al., 2017; Zhao et al., 2017) . A growing body of work defines notions of fairness, bias, and equality in data and machine-learned systems (Pedreshi et al., 2008; Hardt et al., 2016; Skirpan and Gorelick, 2017; Zafar et al., 2017) , and debiasing strategies include expanding and rebalancing data (Torralba and Efros, 2011; Buda, 2017; Ryu et al., 2017; Shankar et al., 2017) , and balancing performance across subgroups (Dwork et al., 2012) . In the context of coreference resolution, Zhao et al. (2018) have shown how debiasing techniques (e.g., swapping the gender of male pronouns and antecedents in OntoNotes, using debiased word embeddings, balancing Bergsma and Lin's [2006] gender list) succeed at reducing the gender bias of multiple off-the-shelf coreference systems.", |
| "cite_spans": [ |
| { |
| "start": 132, |
| "end": 156, |
| "text": "(Bolukbasi et al., 2016;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 157, |
| "end": 179, |
| "text": "Caliskan et al., 2017;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 180, |
| "end": 198, |
| "text": "Zhao et al., 2017)", |
| "ref_id": "BIBREF54" |
| }, |
| { |
| "start": 308, |
| "end": 331, |
| "text": "(Pedreshi et al., 2008;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 332, |
| "end": 351, |
| "text": "Hardt et al., 2016;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 352, |
| "end": 379, |
| "text": "Skirpan and Gorelick, 2017;", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 380, |
| "end": 399, |
| "text": "Zafar et al., 2017)", |
| "ref_id": null |
| }, |
| { |
| "start": 466, |
| "end": 492, |
| "text": "(Torralba and Efros, 2011;", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 493, |
| "end": 504, |
| "text": "Buda, 2017;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 505, |
| "end": 522, |
| "text": "Ryu et al., 2017;", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 523, |
| "end": 544, |
| "text": "Shankar et al., 2017)", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 590, |
| "end": 610, |
| "text": "(Dwork et al., 2012)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 655, |
| "end": 673, |
| "text": "Zhao et al. (2018)", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 826, |
| "end": 850, |
| "text": "Bergsma and Lin's [2006]", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bias in Machine Learning", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "We work towards fairness in coreference by releasing a diverse, gender-balanced corpus for ambiguous pronoun resolution and further investigating performance differences by gender, not specifically on pronouns with an occupation antecedent but more generally on gendered pronouns.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bias in Machine Learning", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "We create a corpus of 8,908 human-annotated ambiguous pronoun-name examples from Wikipedia. Examples are obtained from a large set of candidate contexts and are filtered through a multistage process designed to improve quality and diversity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GAP Corpus", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We choose Wikipedia as our base dataset given its wide use in natural language understanding tools, but are mindful of its well-known gender biases. Specifically, less than 15% of biographical Wikipedia pages are about women. Furthermore, women are written about differently than men: For example, women's biographies are more likely to mention marriage or divorce (Bamman and Smith, 2014) , abstract terms are more positive in male biographies than female biographies (Wagner et al., 2016) , and articles about women are less central to the article graph (Graells-Garrido et al., 2015).", |
| "cite_spans": [ |
| { |
| "start": 365, |
| "end": 389, |
| "text": "(Bamman and Smith, 2014)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 469, |
| "end": 490, |
| "text": "(Wagner et al., 2016)", |
| "ref_id": "BIBREF48" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GAP Corpus", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Extraction targets three patterns, given in Table 1 , that characterize locally ambiguous pronoun contexts. We limit to singular mentions, gendered non-reflexive pronouns, and names whose head tokens are different from one another. Additionally, we do not allow intruders: There can be no other compatible mention (by gender, number, and entity type) between the pronoun and the two names.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 44, |
| "end": 51, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Extraction and Filtering", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To limit the success of na\u00efve resolution heuristics, we apply a small set of constraints to focus on those pronouns that are truly hard to resolve. \u2022 FINALPRO. Both names must be in the same sentence, and the pronoun may appear in the same or directly following sentence. \u2022 MEDIALPRO. The first name must be in the sentence directly preceding the sentence that contains both the pronoun and the second name. To decrease the bias for the pronoun to be coreferential with the first name, the pronoun must be in an initial subordinate clause or be a possessive in an initial prepositional phrase. \u2022 INITIALPRO. All three mentions must be in the same sentence and the pronoun must be in an initial subordinate clause or a possessive in an initial prepositional phrase.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extraction and Filtering", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "From the extracted contexts, we sub-sample those to send for annotation. We do this to improve diversity in five dimensions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extraction and Filtering", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 Page Coverage. We retain at most three examples per page-gender pair to ensure a broad coverage of domains. \u2022 Gender. The raw pipeline extracts contexts with an m:f ratio of 9:1. We oversampled feminine pronouns to achieve a 1:1 ratio. 5 \u2022 Extraction Pattern. The raw pipeline output contains seven times more FINALPRO contexts than MEDIALPRO and INITIALPRO combined, so we oversampled the latter two to lower the ratio to 6:1:1.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extraction and Filtering", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 Page Entity. Pronouns in a Wikipedia page often refer to the entity the page is about. We include such examples in our dataset but balance them 1:1 against examples that do not include mentions of the page entity. \u2022 Coreferent Name. To ensure that mention order is not a cue for systems, our final dataset is balanced for label -namely, whether Name A or Name B is the pronoun's referent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extraction and Filtering", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We applied these constraints to the raw extractions to select 8,604 contexts (17,208 examples) for annotation that were globally balanced in all dimensions (e.g., 1:1 gender ratio in MEDIALPRO extractions). Table 2 summarizes the diversity ratios obtained in the final dataset, whose compilation is described next.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 207, |
| "end": 214, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Extraction and Filtering", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We used a pool of in-house raters for human annotation of our examples. Each example was presented to three workers, who selected one of five labels (Table 3) . Full sentences of at least 50 tokens preceding each example were presented as context (prior context beyond a section break is not included). Rating instructions accompany the dataset release.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 149, |
| "end": 158, |
| "text": "(Table 3)", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Annotation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Despite workers not being expert linguists, we find good agreement both within workers and between workers and an expert. Inter-annotator agreement was \u03ba = 0.74 on the Fleiss et al. (2003) kappa statistic; in 73% of cases there was full agreement between workers, in 25% of cases two of three workers agreed, and only in 2% of cases was there no consensus. We discard the 194 cases with no consensus. On 30 examples rated by an expert linguist, there was agreement on 28 and one was deemed to be truly ambiguous with the given context. To produce our final dataset, we applied additional high-precision filtering to remove some error cases identified by workers, 6 and discarded the \"Both\" (no ambiguity) and \"Not Sure\" contexts. Given that many of the feminine examples received the \"Both\" label from referents having stage and married names (Example (4)), this unbalanced the number of masculine and feminine examples.", |
| "cite_spans": [ |
| { |
| "start": 168, |
| "end": 188, |
| "text": "Fleiss et al. (2003)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Ruby Buckton is a fictional character from the Australian Channel Seven soap opera Home and Away, played by Rebecca Breeds. She debuted . . .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(4)", |
| "sec_num": null |
| }, |
| { |
| "text": "To correct this, we discarded masculine examples to re-achieve 1:1 gender balance. Additionally, we imposed the constraint that there be one example per Wikipedia article per pronoun form (e.g., his), to reduce similarity between examples. The final counts for each label are given in the second column of Table 3 . Given that the 4,454 contexts each contain two annotated names, this constitutes 8,908 pronoun-name pair labels.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 306, |
| "end": 313, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "(4)", |
| "sec_num": null |
| }, |
| { |
| "text": "We set up the GAP challenge and analyze the applicability of a range of off-the-shelf tools. We find that existing resolvers do not perform well and are biased to favor better resolution of masculine pronouns. We empirically validate the observation that Transformer models (Vaswani et al., 2017) encode coreference relationships, adding to the results by Voita et al. (2018) on machine translation, and Trinh and Le (2018) on language modeling. Furthermore, we show they complement traditional linguistic cues such as syntactic distance and parallelism.", |
| "cite_spans": [ |
| { |
| "start": 274, |
| "end": 296, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 356, |
| "end": 375, |
| "text": "Voita et al. (2018)", |
| "ref_id": "BIBREF47" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "All experiments use the Google Cloud NL API 7 for pre-processing, unless otherwise noted.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "GAP is an evaluation corpus and we segment the final dataset into a development and test set of 4,000 examples each; 8 we reserve the remaining 908 examples as a small validation set for parameter tuning. All examples are presented with the URL of the source Wikipedia page, allowing us to define two task settings: snippet-context in which the URL may not be used, and page-context in which it may. Although name spans are given in the data, we urge the community not to treat this as a gold mention or Winograd-style task. That is, systems should detect mentions for inference automatically, and access labeled spans only to output predictions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GAP Challenge", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "To reward unbiased modeling, we define two evaluation metrics: F1 score and Bias. Concretely, we calculate F1 score Overall as well as by the gender of the pronoun (Masculine and Feminine). Bias is calculated by taking the ratio of feminine to masculine F1 scores, typically less than 1. 9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GAP Challenge", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The first set of baselines we explore are four representative off-the-shelf coreference systems: the rule-based system of Lee et al. (2013) and three neural resolvers-Clark and Manning (2015), 10 Wiseman et al. (2016), 11 and Lee et al. (2017) . 12 All were trained on OntoNotes and run in as close to their out-of-the-box configuration as possible. 13 System clusters were scored against GAP examples according to whether the cluster 7 https://cloud.google.com/natural-language/. 8 All examples extracted from the same URL are partitioned into the same set.", |
| "cite_spans": [ |
| { |
| "start": 122, |
| "end": 139, |
| "text": "Lee et al. (2013)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 193, |
| "end": 195, |
| "text": "10", |
| "ref_id": null |
| }, |
| { |
| "start": 219, |
| "end": 243, |
| "text": "11 and Lee et al. (2017)", |
| "ref_id": null |
| }, |
| { |
| "start": 246, |
| "end": 248, |
| "text": "12", |
| "ref_id": null |
| }, |
| { |
| "start": 350, |
| "end": 352, |
| "text": "13", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Off-the-Shelf Resolvers", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "9 http://goo.gl/language/gap-coreference. 10 https://stanfordnlp.github.io/CoreNLP/ download.html.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Off-the-Shelf Resolvers", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "11 https://github.com/swiseman/nn_coref. 12 https://github.com/kentonl/e2e-coref. 13 We run Lee et al. (2017) in the final (single-model) configuration, with NLTK preprocessing (Bird and Loper, 2004) ; for Wiseman et al. 2016 68.4 59.9 0.88 64.2 Lee et al. (2017) 67.2 62.2 0.92 64.7 201768.9 51.9 0.75 63.4 containing the target pronoun also contained the correct name (TP) or the incorrect name (FP), using mention heads for alignment. We report here their performance on GAP as informative baselines, but expect retraining on Wikipedia-like texts to yield an overall improvement in performance. (This remains as future work.) Table 4 shows that all systems struggle on GAP. That is, despite modeling improvements in recent years, ambiguous pronoun resolution remains a challenge. We note particularly the large difference in performance between genders, which traditionally has not been tracked but has fairness implications for downstream tasks using these publicly available models. Table 5 provides evidence that this low performance is not solely due to domain and task differences between GAP and OntoNotes. Specifically, with the exception of Clark and Manning (2015) , the table shows that system performance on pronoun-name coreference relations in the OntoNotes test set 14 is not vastly better than GAP. performance are not very different could be that state-of-the-art systems are highly tuned for resolving names rather than ambiguous pronouns. Further, the relative performance of the four systems is different on GAP than on OntoNotes. Particularly interesting is that the current strongest system overall for OntoNotes, namely, Lee et al. (2017) , scores best on GAP pronouns but has the largest gender bias on OntoNotes. This perhaps is not surprising given the dominance of masculine examples in that corpus. 
It is outside the scope of this paper to provide an in-depth analysis of the data and modeling decisions that cause this bias; instead, we release GAP to address the measurement problem behind the bias. Figure 1 compares the recall/precision trade-off for each system split by Masculine and Feminine examples, as well as combined (Overall). Also shown is a simple syntactic Parallelism heuristic in which subject and direct object pronoun are resolved to names with the same grammatical role (see \u00a74.3). In this visualization, we see a further factor contributing to the low performance of off-the-shelf systems, namely, their low recall. That is, whereas personal pronouns are overwhelmingly anaphoric in both OntoNotes and Wikipedia texts, OntoNotes-trained models are conservative. This observation is consistent with the results for Lee et al. (2013) on which the system scored 47.2% F1, 15 failing to beat a random baseline due to conservativeness.", |
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 84, |
| "text": "13", |
| "ref_id": null |
| }, |
| { |
| "start": 92, |
| "end": 109, |
| "text": "Lee et al. (2017)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 177, |
| "end": 199, |
| "text": "(Bird and Loper, 2004)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 246, |
| "end": 263, |
| "text": "Lee et al. (2017)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 1152, |
| "end": 1176, |
| "text": "Clark and Manning (2015)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1646, |
| "end": 1663, |
| "text": "Lee et al. (2017)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 2666, |
| "end": 2683, |
| "text": "Lee et al. (2013)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 629, |
| "end": 636, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 988, |
| "end": 995, |
| "text": "Table 5", |
| "ref_id": "TABREF6" |
| }, |
| { |
| "start": 2032, |
| "end": 2040, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Off-the-Shelf Resolvers", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "To understand the shortcomings of state-of-the-art coreference systems on GAP, the upper sections of Table 6 consider several simple baselines based on traditional cues for coreference.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 101, |
| "end": 108, |
| "text": "Table 6", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Coreference-Cue Baselines", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "To calculate these baselines, we first detect candidate antecedents by finding all mentions of PERSON entity type, NAME mention type (headed by a proper noun), and, for structural cues, that are not in a syntactic position which precludes coreference with the pronoun. We do not require gender match because gender annotations are not provided by the Google Cloud NL API and, even if they were, gender predictions on last names (without the first name) are not reliable in the snippetcontext setting. Second, we select among the candidates using one of the heuristics described next.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference-Cue Baselines", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "For scoring purposes, we do not require exact string match for mention alignment-that is, if the selected candidate is a substring of a given name (or vice versa), we infer a coreference relation between that name and the target pronoun. 16 Surface Cues Baseline cues that require only access to the input text are: Table 7 : Performance of our baselines on the development set in the gold-two-mention task (access to the two candidate name spans). Parallelism+URL tests the page-context setting; all others test the snippet-context setting. Bold indicates best performance in each setting.", |
| "cite_spans": [ |
| { |
| "start": 238, |
| "end": 240, |
| "text": "16", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 316, |
| "end": 323, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Coreference-Cue Baselines", |
| "sec_num": "4.3" |
| }, |
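The lenient alignment rule just described amounts to a mutual-substring check; the sketch below is mine, and the whitespace stripping is an assumption the paper does not specify.

```python
def mentions_align(candidate, name):
    """Lenient alignment used for scoring: infer coreference when the
    selected candidate is a substring of the labeled name or vice
    versa (e.g., a bare surname aligning with the full name).
    Normalization beyond stripping whitespace is not attempted."""
    candidate, name = candidate.strip(), name.strip()
    return candidate in name or name in candidate
```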
| { |
| "text": "\u2022 RANDOM.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference-Cue Baselines", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 TOPICAL ENTITY. Select the closest candidate that contains the most frequent token string among extracted candidates.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference-Cue Baselines", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The performance of RANDOM (41.5 Overall) is lower than an otherwise possible guess rate of \u223c50%. This is because the baseline considers all possible candidates, not just the two annotated names. Moreover, the difference between masculine and feminine examples suggests that there are more distractor mentions in the context of feminine pronouns in GAP. To measure the impact of pronoun context, we include performance on the artificial gold-two-mention setting, where only the two name spans are candidates for inference (Table 7) . RANDOM is indeed closer here to the expected 50% and other baselines are closer to gender-parity. TOKEN DISTANCE and TOPICAL ENTITY are only weak improvements above RANDOM, validating that our dataset creation methodology controlled for these factors.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 521, |
| "end": 530, |
| "text": "(Table 7)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Coreference-Cue Baselines", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Structural Cues Baseline cues that may additionally access syntactic structure are:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference-Cue Baselines", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 SYNTACTIC DISTANCE. Select the syntactically closest candidate to the pronoun. Back off to TOKEN DISTANCE. \u2022 PARALLELISM. If the pronoun is a subject or direct object, select the closest candidate with the same grammatical argument. Back off to SYNTACTIC DISTANCE.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference-Cue Baselines", |
| "sec_num": "4.3" |
| }, |
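As a concrete sketch, the structural cues form a back-off chain. The candidate fields (`pos`, `role`) are assumptions about a pre-parsed mention representation, and token distance stands in for true syntactic (tree) distance, so this is an approximation of the heuristics, not the authors' implementation.

```python
def token_distance(candidates, pronoun_pos):
    # TOKEN DISTANCE: closest candidate to the pronoun by token offset.
    return min(candidates, key=lambda c: abs(c["pos"] - pronoun_pos))

def parallelism(candidates, pronoun_pos, pronoun_role):
    """PARALLELISM with back-off: when the pronoun is a subject or
    direct object, prefer the closest candidate sharing its grammatical
    role; otherwise fall back to distance. (The paper backs off to
    SYNTACTIC DISTANCE; token distance stands in for it here.)"""
    if pronoun_role in ("subject", "direct_object"):
        same_role = [c for c in candidates if c["role"] == pronoun_role]
        if same_role:
            return token_distance(same_role, pronoun_pos)
    return token_distance(candidates, pronoun_pos)
```

Note how a role match beats proximity: a subject pronoun resolves to a farther subject candidate over a nearer object candidate.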
| { |
| "text": "Both cues yield strong baselines comparable to the strongest OntoNotes-trained systems (cf. Table 4 ). In fact, Lee et al. (2017) and PARALLELISM produce remarkably similar output: of the 2,000 example pairs in the development set, the two have completely opposing predictions (i.e., Name A vs. Name B) on only 325 examples. Further, the cues are markedly gender-neutral, improving the Bias metric by 9 percentage points in the standard task formulation and to parity in the gold-two-mention case. In contrast to surface cues, having the full candidate set is helpful: mention alignment via a non-indicated candidate successfully scores 69% of PARALLELISM predictions.", |
| "cite_spans": [ |
| { |
| "start": 112, |
| "end": 129, |
| "text": "Lee et al. (2017)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 92, |
| "end": 99, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Coreference-Cue Baselines", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Wikipedia Cues To explore the page-context setting, we consider a Wikipedia-specific cue:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference-Cue Baselines", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 URL. Select the syntactically closest candidate that has a token overlap with the page title. Back off to PARALLELISM.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference-Cue Baselines", |
| "sec_num": "4.3" |
| }, |
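The URL cue can be sketched as a filter over candidates plus a back-off; the candidate schema (`name`, `pos`), the lower-cased token overlap, and the use of token distance in place of syntactic distance are all simplifying assumptions.

```python
def url_cue(candidates, page_title, pronoun_pos, fallback):
    """URL heuristic sketch: among candidates sharing a token with the
    Wikipedia page title, pick the one closest to the pronoun; defer
    to `fallback` (PARALLELISM in the paper) when none overlaps."""
    title_tokens = set(page_title.lower().split())
    overlap = [c for c in candidates
               if title_tokens & set(c["name"].lower().split())]
    if overlap:
        return min(overlap, key=lambda c: abs(c["pos"] - pronoun_pos))
    return fallback(candidates, pronoun_pos)
```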
| { |
| "text": "The heuristic gives a performance gain of 2% overall compared to PARALLELISM. That the feature is not more helpful again validates our methodology for extracting diverse examples. We expect future work to greatly improve on this baseline by using the wealth of cues in Wikipedia articles, including page text.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference-Cue Baselines", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The recent Transformer model (Vaswani et al., 2017) demonstrated tantalizing representations for coreference: When trained for machine translation, some self-attention layers appear to show stronger attention weights between coreferential elements. 17 Voita et al. (2018) found evidence for", |
| "cite_spans": [ |
| { |
| "start": 29, |
| "end": 51, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 249, |
| "end": 251, |
| "text": "17", |
| "ref_id": null |
| }, |
| { |
| "start": 252, |
| "end": 271, |
| "text": "Voita et al. (2018)", |
| "ref_id": "BIBREF47" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transformer Models for Coreference", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "17 See Figure 4 at https://arxiv.org/abs/1706.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 7, |
| "end": 15, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Transformer Models for Coreference", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "this claim for the English pronouns it, you, and I in a movie subtitles dataset (Lison et al., 2018) . GAP allows us to explore this claim on Wikipedia for ambiguous personal pronouns. To do so, we investigate the heuristic:", |
| "cite_spans": [ |
| { |
| "start": 80, |
| "end": 100, |
| "text": "(Lison et al., 2018)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "03762.", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 TRANSFORMER. Select the candidate that attends most to the pronoun.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "03762.", |
| "sec_num": null |
| }, |
| { |
| "text": "The Transformer model underlying our experiments is trained for 350k steps on the 2014 English-German NMT task, 18 using the same settings as Vaswani et al. (2017) . The model processes texts as a series of subtokens (text fragments the size of a token or smaller) and learns three multi-head attention matrices over these, two self-attention matrices (one over the subtokens of the source sentences and one over those of the target sentences), and a cross-attention matrix between the source and target. Each attention matrix is decomposed into a series of feedforward layers, each composed of discrete heads designed to specialize for different dimensions in the training signal. We input GAP snippets as English source text and extract attention values from the source self-attention matrix; the target side (German translations) is not used. We calculate the attention between a name and pronoun to be the mean over all subtokens in these spans; the attention between two subtokens is the sum of the raw attention values between all occurrences of those subtoken strings in the input snippet. These two factors control for variation between Transformer models and the spreading of attention between different mentions of the same entity. Table 8 gives the performance of the TRANSFORMER heuristic over each self-attention head on the development dataset. Consistent with the observations by Vaswani et al. (2017) , we observe that the coreference signal is localized on specific heads and that these heads are in the deep layers of the network (e.g., L3H7). During development, we saw that the specific heads which specialize for coreference are different between different models.", |
| "cite_spans": [ |
| { |
| "start": 142, |
| "end": 163, |
| "text": "Vaswani et al. (2017)", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 1395, |
| "end": 1416, |
| "text": "Vaswani et al. (2017)", |
| "ref_id": "BIBREF46" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1242, |
| "end": 1249, |
| "text": "Table 8", |
| "ref_id": "TABREF11" |
| } |
| ], |
| "eq_spans": [], |
| "section": "03762.", |
| "sec_num": null |
| }, |
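The two aggregation steps just described (sum raw attention over all occurrences of a subtoken-string pair, then average over the subtoken pairs in the two spans) can be sketched as follows. The `[seq, seq]` attention matrix, the subtoken list, and the `(start, end)` span convention are assumptions about shapes, not the authors' code.

```python
import numpy as np

def span_attention(attn, subtokens, name_span, pronoun_span):
    """Name-pronoun attention per the recipe above: attention between
    two subtoken *strings* is summed over all occurrences of those
    strings in the snippet, and the span score is the mean over the
    subtoken pairs covered by the two (start, end) spans."""
    occurrences = {}
    for i, tok in enumerate(subtokens):
        occurrences.setdefault(tok, []).append(i)

    def string_attention(a, b):
        return sum(attn[i, j] for i in occurrences[a] for j in occurrences[b])

    scores = [string_attention(subtokens[i], subtokens[j])
              for i in range(*name_span) for j in range(*pronoun_span)]
    return float(np.mean(scores))
```

Summing over string occurrences is what pools attention spread across repeated mentions of the same entity, as the paragraph above motivates.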
| { |
| "text": "The TRANSFORMER-SINGLE baseline in Table 6 is the one set by L3H7 in Table 8 . Despite not having access to syntactic structure, TRANSFORMER-SINGLE far outperforms all surface cues above. That is, we find evidence for the claim that Transformer models implicitly learn language understanding relevant to coreference resolution. Even more promising, we find that the instances of coreference that TRANSFORMER-SINGLE can handle is substantially different from those of PARALLELISM; see Table 9 . work could explore filtering the candidate list presented to Transformer models to reduce the impact of distractor mentions in a pronoun's contextfor example, by gender in the page-context setting. It is also worth stressing that these models are trained on very little data (the GAP validation set). These preliminary results suggest that learned models incorporating such features from the Transformer and using more data are worth exploring further. Table 10 sets the baselines for the GAP challenge. We include the off-the-shelf system that performed best Overall on the development set (Lee et al., 2017) , as well as our strongest baseline for the two task settings, PARALLELISM 20 and URL.", |
| "cite_spans": [ |
| { |
| "start": 1085, |
| "end": 1103, |
| "text": "(Lee et al., 2017)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 35, |
| "end": 42, |
| "text": "Table 6", |
| "ref_id": "TABREF8" |
| }, |
| { |
| "start": 69, |
| "end": 76, |
| "text": "Table 8", |
| "ref_id": "TABREF11" |
| }, |
| { |
| "start": 484, |
| "end": 491, |
| "text": "Table 9", |
| "ref_id": "TABREF13" |
| }, |
| { |
| "start": 947, |
| "end": 955, |
| "text": "Table 10", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "TRANSFORMER-SINGLE", |
| "sec_num": null |
| }, |
| { |
| "text": "We note that strict comparisons cannot be made between our snippet-context baselines given that Lee et al. (2017) has access to OntoNotes annotations that we do not, and we have access to pronoun ambiguity annotations that Lee et al. (2017) do not.", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 113, |
| "text": "Lee et al. (2017)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 223, |
| "end": 240, |
| "text": "Lee et al. (2017)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GAP Benchmarks", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "We have shown that GAP is challenging for both off-the-shelf systems and our baselines. To assess the variance between these systems and gain a more qualitative understanding of what aspects of GAP are challenging, we use the number of off-the-shelf systems that agree with the rater-provided labels (Agreement with Gold) as a proxy for difficulty. Agreement with Gold (the smaller the agreement the harder the example). 21 Agreement with Gold is low (average 2.1) and spread. Less than 30% of the examples are successfully solved by all systems (labeled Green), and just under 15% are so challenging that none of the systems gets them right (Red). The majority are in between (Yellow). Many Green cases have syntactic cues for coreference, but we find no systematic trends within Yellow. Table 12 provides a fine-grained analysis of 75 Red cases. When labeling these cases, two important considerations emerged: (1) labels often overlap, with one example possibly fitting into multiple categories; and (2) GAP requires global reasoning-cues from different entity mentions work together to build a snippet's interpretation. The Red examples in particular exemplify the challenge of GAP, and point toward the need for multiple modeling strategies to achieve significantly higher scores on the data set.", |
| "cite_spans": [ |
| { |
| "start": 421, |
| "end": 423, |
| "text": "21", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 789, |
| "end": 797, |
| "text": "Table 12", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "5" |
| }, |
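The Agreement-with-Gold proxy and the Green/Yellow/Red bucketing reduce to a small counting function; the function name and return shape are mine.

```python
def difficulty_bucket(system_predictions, gold_label):
    """Proxy difficulty from the error analysis above: count how many
    systems agree with the gold label, then bucket the example as
    Green (all correct), Red (none correct), or Yellow (in between)."""
    agreement = sum(1 for pred in system_predictions if pred == gold_label)
    if agreement == len(system_predictions):
        color = "Green"
    elif agreement == 0:
        color = "Red"
    else:
        color = "Yellow"
    return agreement, color
```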
| { |
| "text": "We have presented a data set and a set of strong baselines for a new coreference task, GAP. We designed GAP to represent the challenges posed by real-world text, in which ambiguous pronouns are important and difficult to resolve. We high- 21 Given that system predictions are not independent for the two candidate names for a given snippet, we only focus on the positive coreferential name-pronoun pair when the gold label is either \"Name A\" or \"Name B\"; we use both name-pronoun pairs when the gold label is \"Neither\". lighted gaps in the existing state of the art, and proposed the application of Transformer models to address these. Specifically, we show how traditional linguistic features and modern sentence encoder technology are complementary.", |
| "cite_spans": [ |
| { |
| "start": 239, |
| "end": 241, |
| "text": "21", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Our work contributes to the emerging body of work on the impact of bias in machine learning. We saw systematic differences between genders in analysis; this is consistent with many studies that have called out differences in how men and women are discussed publicly. By rebalancing our data set for gender, we hope to reward systems that are able to capture these complexities fairly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "It has been outside the scope of this paper to explore bias in other dimensions, to analyze coreference in other languages, and to study the impact on downstream systems of improved coreference resolution. We look forward to future work in these directions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The examples throughout the paper highlight the ambiguous pronoun in bold, the two potential coreferent names in italics, and the correct one also underlined.2 http://goo.gl/language/gap-coreference.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://cs.nyu.edu/faculty/davise/papers/ WinogradSchemas/WSCollection.xml.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In doing this, we observed that many feminine pronouns in Wikipedia refer to characters in film and television.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For example, missing sentence breaks, list environments, and non-referential personal roles/nationalities.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For each gendered pronoun in a gold OntoNotes cluster, we compare the system cluster with that pronoun. We count a TP if the system entity contains at least one gold coreferent NE mention; FP if the system entity contains at least one non-gold NE mention, and FN if the system entity does not contain any gold NE mention.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Calculated based on the reported performance of 40.07% Correct, 29.79% Incorrect, and 30.14% No decision.16 Note that requiring exact string match drops recall and causes only a small difference in F1 performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.statmt.org/wmt14/translationtask.html.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://scikit-learn.org/stable/modules/ generated/sklearn.ensemble.ExtraTreesClas sifier.html.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We also trained an Extra Tree classifier over all explored coreference-cue baselines (including Transformerbased heuristics), but its performance was similar to PARAL-LELISM and the predictions matched in the vast majority of instances.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank our anonymous reviewers and the Google AI Language team, especially Emily Pitler, for the insightful comments that contributed to this paper. Many thanks also to the Data Compute team, especially Ashwin Kakarla, Henry Jicha, and Daphne Luong, for their help with the annotations, and thanks to Llion Jones for his help with the Transformer experiments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Unsupervised discovery of biographical structure from text", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Bamman", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Transactions of the ACL", |
| "volume": "2", |
| "issue": "", |
| "pages": "363--376", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Bamman and Noah A. Smith. 2014. Un- supervised discovery of biographical structure from text. Transactions of the ACL, 2:363-376.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Coreference semantics from web features", |
| "authors": [ |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Bansal", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "389--398", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohit Bansal and Dan Klein. 2012. Coreference semantics from web features. In Proceedings of ACL, pages 389-398, Jeju Island, Korea.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Unsupervised learning of contextual role knowledge for coreference resolution", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Bean", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "297--304", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Bean and Ellen Riloff. 2004. Unsuper- vised learning of contextual role knowledge for coreference resolution. In Proceedings of HLT-NAACL, pages 297-304, Boston, Massachusetts.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Bootstrapping path-based pronoun resolution", |
| "authors": [ |
| { |
| "first": "Shane", |
| "middle": [], |
| "last": "Bergsma", |
| "suffix": "" |
| }, |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "33--40", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shane Bergsma and Dekang Lin. 2006. Boot- strapping path-based pronoun resolution. In Proceedings of ACL, pages 33-40, Sydney, Australia.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "NLTK: The natural language toolkit", |
| "authors": [ |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Bird", |
| "suffix": "" |
| }, |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Loper", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the ACL 2004 on Interactive Poster and Demonstration Sessions", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steven Bird and Edward Loper. 2004. NLTK: The natural language toolkit. In Proceedings of the ACL 2004 on Interactive Poster and Demon- stration Sessions, Barcelona, Spain.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Man is to computer programmer as woman is to homemaker? Debiasing word embeddings", |
| "authors": [ |
| { |
| "first": "Tolga", |
| "middle": [], |
| "last": "Bolukbasi", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Zou", |
| "suffix": "" |
| }, |
| { |
| "first": "Venkatesh", |
| "middle": [], |
| "last": "Saligrama", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Kalai", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "4349--4357", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Proceedings of NIPS, pages 4349-4357, Barcelona, Spain.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A systematic study of the class imbalance problem in convolutional neural networks", |
| "authors": [ |
| { |
| "first": "Mateusz", |
| "middle": [], |
| "last": "Buda", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mateusz Buda. 2017. A systematic study of the class imbalance problem in convolutional neu- ral networks. Master's thesis, KTH Royal Insti- tute of Technology.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Semantics derived automatically from language corpora contain human-like biases", |
| "authors": [ |
| { |
| "first": "Aylin", |
| "middle": [], |
| "last": "Caliskan", |
| "suffix": "" |
| }, |
| { |
| "first": "Joanna", |
| "middle": [ |
| "J" |
| ], |
| "last": "Bryson", |
| "suffix": "" |
| }, |
| { |
| "first": "Arvind", |
| "middle": [], |
| "last": "Narayanan", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Science", |
| "volume": "356", |
| "issue": "6334", |
| "pages": "183--186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automati- cally from language corpora contain human-like biases. Science, 356(6334):183-186.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Unsupervised learning of narrative event chains", |
| "authors": [ |
| { |
| "first": "Nathanael", |
| "middle": [], |
| "last": "Chambers", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACL: HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "789--797", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nathanael Chambers and Dan Jurafsky. 2008. Un- supervised learning of narrative event chains. In Proceedings of ACL: HLT, pages 789-797, Columbus, Ohio.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Entity-centric coreference resolution with model stacking", |
| "authors": [ |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1405--1415", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kevin Clark and Christopher D. Manning. 2015. Entity-centric coreference resolution with model stacking. In Proceedings of ACL, pages 1405-1415, Beijing, China.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Automatic processing of large corpora for the resolution of anaphora references", |
| "authors": [ |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| }, |
| { |
| "first": "Alon", |
| "middle": [], |
| "last": "Itai", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Proceedings of COL-ING", |
| "volume": "", |
| "issue": "", |
| "pages": "330--332", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ido Dagan and Alon Itai. 1990. Automatic pro- cessing of large corpora for the resolution of anaphora references. In Proceedings of COL- ING, pages 330-332, Helsinki, Finland.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Easy victories and uphill battles in coreference resolution", |
| "authors": [ |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Durrett", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "1971--1982", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceedings of EMNLP, pages 1971-1982, Seattle, Washington.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A joint model for entity analysis: Coreference, typing, and linking", |
| "authors": [ |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Durrett", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Transactions of the ACL", |
| "volume": "2", |
| "issue": "", |
| "pages": "477--490", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. Transactions of the ACL, 2:477-490.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Fairness through awareness", |
| "authors": [ |
| { |
| "first": "Cynthia", |
| "middle": [], |
| "last": "Dwork", |
| "suffix": "" |
| }, |
| { |
| "first": "Moritz", |
| "middle": [], |
| "last": "Hardt", |
| "suffix": "" |
| }, |
| { |
| "first": "Toniann", |
| "middle": [], |
| "last": "Pitassi", |
| "suffix": "" |
| }, |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Reingold", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [ |
| "S" |
| ], |
| "last": "Zemel", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of ITCS", |
| "volume": "", |
| "issue": "", |
| "pages": "214--226", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard S. Zemel. 2012. Fairness through awareness. In Proceedings of ITCS, pages 214-226.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The Measurement of Interrater Agreement", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [ |
| "L" |
| ], |
| "last": "Fleiss", |
| "suffix": "" |
| }, |
| { |
| "first": "Bruce", |
| "middle": [], |
| "last": "Levin", |
| "suffix": "" |
| }, |
| { |
| "first": "Myunghee Cho", |
| "middle": [], |
| "last": "Paik", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joseph L. Fleiss, Bruce Levin, and Myunghee Cho Paik. 2003. The Measurement of Interrater Agreement, 3rd edition. John Wiley and Sons Inc.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Extremely randomized trees", |
| "authors": [ |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Geurts", |
| "suffix": "" |
| }, |
| { |
| "first": "Damien", |
| "middle": [], |
| "last": "Ernst", |
| "suffix": "" |
| }, |
| { |
| "first": "Louis", |
| "middle": [], |
| "last": "Wehenkel", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Machine Learning", |
| "volume": "63", |
| "issue": "", |
| "pages": "3--42", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pierre Geurts, Damien Ernst, and Louis Wehenkel. 2006. Extremely randomized trees. Machine Learning, 63(1):3-42.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Coreference in Wikipedia: Main Concept Resolution", |
| "authors": [ |
| { |
| "first": "Abbas", |
| "middle": [], |
| "last": "Ghaddar", |
| "suffix": "" |
| }, |
| { |
| "first": "Philippe", |
| "middle": [], |
| "last": "Langlais", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "229--238", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abbas Ghaddar and Philippe Langlais. 2016a. Coreference in Wikipedia: Main Concept Resolution. In Proceedings of CoNLL, pages 229-238, Berlin, Germany.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "WikiCoref: An English coreference-annotated corpus of Wikipedia articles", |
| "authors": [ |
| { |
| "first": "Abbas", |
| "middle": [], |
| "last": "Ghaddar", |
| "suffix": "" |
| }, |
| { |
| "first": "Philippe", |
| "middle": [], |
| "last": "Langlais", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abbas Ghaddar and Philippe Langlais. 2016b. WikiCoref: An English coreference-annotated corpus of Wikipedia articles. In Proceedings of LREC, Portoro\u017e, Slovenia.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "First women, second sex: Gender bias in Wikipedia", |
| "authors": [ |
| { |
| "first": "Eduardo", |
| "middle": [], |
| "last": "Graells-Garrido", |
| "suffix": "" |
| }, |
| { |
| "first": "Mounia", |
| "middle": [], |
| "last": "Lalmas", |
| "suffix": "" |
| }, |
| { |
| "first": "Filippo", |
| "middle": [], |
| "last": "Menczer", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 26th ACM Conference on Social Media", |
| "volume": "", |
| "issue": "", |
| "pages": "165--174", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eduardo Graells-Garrido, Mounia Lalmas, and Filippo Menczer. 2015. First women, second sex: Gender bias in Wikipedia. In Proceedings of the 26th ACM Conference on Social Media, pages 165-174, Guzelyurt, Northern Cyprus.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Message Understanding Conference 6: A brief history", |
| "authors": [ |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| }, |
| { |
| "first": "Beth", |
| "middle": [], |
| "last": "Sundheim", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "466--471", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ralph Grishman and Beth Sundheim. 1996. Message Understanding Conference 6: A brief history. In Proceedings of COLING, pages 466-471, Copenhagen, Denmark.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Improving pronoun translation for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Liane", |
| "middle": [], |
| "last": "Guillou", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Student Research Workshop at the 13th Conference of the EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liane Guillou. 2012. Improving pronoun translation for statistical machine translation. In Proceedings of the Student Research Workshop at the 13th Conference of the EACL, pages 1-10, Avignon, France.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "How big data is unfair: Understanding unintended sources of unfairness in data driven decision making", |
| "authors": [ |
| { |
| "first": "Moritz", |
| "middle": [], |
| "last": "Hardt", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Moritz Hardt. 2014. How big data is unfair: Understanding unintended sources of unfairness in data driven decision making. https://medium.com/@mrtz/how-big-data-is-unfair-9aa544d739de.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Equality of opportunity in supervised learning", |
| "authors": [ |
| { |
| "first": "Moritz", |
| "middle": [], |
| "last": "Hardt", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Price", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Srebro", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "3323--3331", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Moritz Hardt, Eric Price, and Nathan Srebro. 2016. Equality of opportunity in supervised learning. In Proceedings of NIPS, pages 3323-3331, Barcelona, Spain.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "The (non)utility of predicate-argument frequencies for pronoun interpretation", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Kehler", |
| "suffix": "" |
| }, |
| { |
| "first": "Douglas", |
| "middle": [], |
| "last": "Appelt", |
| "suffix": "" |
| }, |
| { |
| "first": "Lara", |
| "middle": [], |
| "last": "Taylor", |
| "suffix": "" |
| }, |
| { |
| "first": "Aleksandr", |
| "middle": [], |
| "last": "Simma", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "289--296", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Kehler, Douglas Appelt, Lara Taylor, and Aleksandr Simma. 2004. The (non)utility of predicate-argument frequencies for pronoun interpretation. In Proceedings of HLT-NAACL, pages 289-296, Boston, Massachusetts.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Deterministic coreference resolution based on entity-centric, precision-ranked rules", |
| "authors": [ |
| { |
| "first": "Heeyoung", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Angel", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yves", |
| "middle": [], |
| "last": "Peirsman", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathanael", |
| "middle": [], |
| "last": "Chambers", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Computational Linguistics", |
| "volume": "39", |
| "issue": "4", |
| "pages": "885--916", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Computational Linguistics, 39(4):885-916.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "End-to-end neural coreference resolution", |
| "authors": [ |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Luheng", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "188--197", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of EMNLP, pages 188-197, Copenhagen, Denmark.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "The Winograd Schema Challenge", |
| "authors": [ |
| { |
| "first": "Hector", |
| "middle": [], |
| "last": "Levesque", |
| "suffix": "" |
| }, |
| { |
| "first": "Ernest", |
| "middle": [], |
| "last": "Davis", |
| "suffix": "" |
| }, |
| { |
| "first": "Leora", |
| "middle": [], |
| "last": "Morgenstern", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of KR", |
| "volume": "", |
| "issue": "", |
| "pages": "552--561", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The Winograd Schema Challenge. In Proceedings of KR, pages 552-561, Rome, Italy.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Large corpus-based semantic feature extraction for pronoun coreference", |
| "authors": [ |
| { |
| "first": "Shasha", |
| "middle": [], |
| "last": "Liao", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 2nd Workshop on NLP Challenges in the Information Explosion Era (NLPIX)", |
| "volume": "", |
| "issue": "", |
| "pages": "60--68", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shasha Liao and Ralph Grishman. 2010. Large corpus-based semantic feature extraction for pronoun coreference. In Proceedings of the 2nd Workshop on NLP Challenges in the Information Explosion Era (NLPIX), pages 60-68, Beijing, August.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "OpenSubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora", |
| "authors": [ |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Lison", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00f6rg", |
| "middle": [], |
| "last": "Tiedemann", |
| "suffix": "" |
| }, |
| { |
| "first": "Milen", |
| "middle": [], |
| "last": "Kouylekov", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "1742--1748", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pierre Lison, J\u00f6rg Tiedemann, and Milen Kouylekov. 2018. OpenSubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora. In Proceedings of LREC, pages 1742-1748, Miyazaki, Japan.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Combing context and commonsense knowledge through neural networks for solving Winograd schema problems", |
| "authors": [ |
| { |
| "first": "Quan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hui", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhen-Hua", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodan", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Si", |
| "middle": [], |
| "last": "Wei", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "315--321", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Quan Liu, Hui Jiang, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, and Yu Hu. 2017. Combing context and commonsense knowledge through neural networks for solving Winograd schema problems. In Proceedings of AAAI, pages 315-321, San Francisco, California.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "The Stanford CoreNLP Natural Language Processing Toolkit", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jenny", |
| "middle": [], |
| "last": "Finkel", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [ |
| "J" |
| ], |
| "last": "Bethard", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mcclosky", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of ACL System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "55--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of ACL System Demonstrations, pages 55-60, Baltimore, Maryland.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Planning, executing, and evaluating the Winograd Schema Challenge", |
| "authors": [ |
| { |
| "first": "Leora", |
| "middle": [], |
| "last": "Morgenstern", |
| "suffix": "" |
| }, |
| { |
| "first": "Ernest", |
| "middle": [], |
| "last": "Davis", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [ |
| "L" |
| ], |
| "last": "Ortiz", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "AI Magazine", |
| "volume": "37", |
| "issue": "1", |
| "pages": "50--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leora Morgenstern, Ernest Davis, and Charles L. Ortiz. 2016. Planning, executing, and evaluating the Winograd Schema Challenge. AI Magazine, 37(1):50-54.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Wikipedia mining for triple extraction enhanced by co-reference resolution", |
| "authors": [ |
| { |
| "first": "Kotaro", |
| "middle": [], |
| "last": "Nakayama", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the ISWC Workshop on Social Data on the Web", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kotaro Nakayama. 2008. Wikipedia mining for triple extraction enhanced by co-reference resolution. In Proceedings of the ISWC Workshop on Social Data on the Web, Berlin, Germany.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Discrimination-aware data mining", |
| "authors": [ |
| { |
| "first": "Dino", |
| "middle": [], |
| "last": "Pedreshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Salvatore", |
| "middle": [], |
| "last": "Ruggieri", |
| "suffix": "" |
| }, |
| { |
| "first": "Franco", |
| "middle": [], |
| "last": "Turini", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of KDD", |
| "volume": "", |
| "issue": "", |
| "pages": "560--568", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. 2008. Discrimination-aware data mining. In Proceedings of KDD, pages 560-568, Las Vegas, Nevada.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Solving hard coreference problems", |
| "authors": [ |
| { |
| "first": "Haoruo", |
| "middle": [], |
| "last": "Peng", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Khashabi", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "809--819", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Haoruo Peng, Daniel Khashabi, and Dan Roth. 2015. Solving hard coreference problems. In Proceedings of NAACL, pages 809-819, Denver, Colorado.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution", |
| "authors": [ |
| { |
| "first": "Simone", |
| "middle": [ |
| "Paolo" |
| ], |
| "last": "Ponzetto", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Strube", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "192--199", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simone Paolo Ponzetto and Michael Strube. 2006. Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution. In Proceedings of HLT-NAACL, pages 192-199, New York, New York.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "CoNLL-2012 Shared Task: Modeling multilingual unrestricted coreference in OntoNotes", |
| "authors": [ |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Moschitti", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Uryupina", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuchen", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of CoNLL: Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "1--40", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 Shared Task: Modeling multilingual unrestricted coreference in OntoNotes. In Proceedings of CoNLL: Shared Task, pages 1-40, Jeju, Republic of Korea.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Unrestricted coreference: Identifying entities and events in OntoNotes", |
| "authors": [ |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Lance", |
| "middle": [], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Jessica", |
| "middle": [], |
| "last": "Macbride", |
| "suffix": "" |
| }, |
| { |
| "first": "Linnea", |
| "middle": [], |
| "last": "Micciulla", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of ICSC", |
| "volume": "", |
| "issue": "", |
| "pages": "446--453", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer Pradhan, Lance Ramshaw, Ralph Weischedel, Jessica MacBride, and Linnea Micciulla. 2007. Unrestricted coreference: Identifying entities and events in OntoNotes. In Proceedings of ICSC, pages 446-453, Irvine, California.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Resolving complex cases of definite pronouns: The Winograd Schema Challenge", |
| "authors": [ |
| { |
| "first": "Altaf", |
| "middle": [], |
| "last": "Rahman", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of EMNLP-CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "777--789", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Altaf Rahman and Vincent Ng. 2012. Resolving complex cases of definite pronouns: The Winograd Schema Challenge. In Proceedings of EMNLP-CoNLL, pages 777-789, Jeju, Republic of Korea.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Gender bias in coreference resolution", |
| "authors": [ |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Rudinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Naradowsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Leonard", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of NAACL, New Orleans, Louisiana.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Improving smiling detection with race and gender diversity", |
| "authors": [ |
| { |
| "first": "Hee", |
| "middle": [ |
| "Jung" |
| ], |
| "last": "Ryu", |
| "suffix": "" |
| }, |
| { |
| "first": "Margaret", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "Hartwig", |
| "middle": [], |
| "last": "Adam", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hee Jung Ryu, Margaret Mitchell, and Hartwig Adam. 2017. Improving smiling detection with race and gender diversity. ArXiv e-prints v2 https://arxiv.org/abs/1712.00193.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "No classification without representation: Assessing geodiversity issues in open data sets for the developing world", |
| "authors": [ |
| { |
| "first": "Shreya", |
| "middle": [], |
| "last": "Shankar", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoni", |
| "middle": [], |
| "last": "Halpern", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Breck", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Atwood", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimbo", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Sculley", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the NIPS Workshop on Machine Learning for the Developing World", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shreya Shankar, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson, and D. Sculley. 2017. No classification without representation: Assessing geodiversity issues in open data sets for the developing world. In Proceedings of the NIPS Workshop on Machine Learning for the Developing World, Long Beach, California.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "The authority of \"fair\" in machine learning", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Skirpan", |
| "suffix": "" |
| }, |
| { |
| "first": "Micha", |
| "middle": [], |
| "last": "Gorelick", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the KDD Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Skirpan and Micha Gorelick. 2017. The authority of \"fair\" in machine learning. In Proceedings of the KDD Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML), Halifax, Nova Scotia.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Conundrums in noun phrase coreference resolution: Making sense of the state-of-the-art", |
| "authors": [ |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Gilbert", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of ACL-IJCNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "656--664", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Veselin Stoyanov, Nathan Gilbert, Claire Cardie, and Ellen Riloff. 2009. Conundrums in noun phrase coreference resolution: Making sense of the state-of-the-art. In Proceedings of ACL-IJCNLP, pages 656-664, Singapore.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Unbiased look at dataset bias", |
| "authors": [ |
| { |
| "first": "Antonio", |
| "middle": [], |
| "last": "Torralba", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexei", |
| "middle": [ |
| "A" |
| ], |
| "last": "Efros", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of CVPR 2011", |
| "volume": "", |
| "issue": "", |
| "pages": "1521--1528", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Antonio Torralba and Alexei A. Efros. 2011. Unbiased look at dataset bias. In Proceedings of CVPR 2011, pages 1521-1528, Colorado Springs, Colorado.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "A simple method for commonsense reasoning", |
| "authors": [ |
| { |
| "first": "Trieu", |
| "middle": [ |
| "H" |
| ], |
| "last": "Trinh", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Trieu H. Trinh and Quoc V. Le. 2018. A simple method for commonsense reasoning. ArXiv e-prints v1 https://arxiv.org/abs/1806.02847.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "Lukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "6000--6010", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS, pages 6000-6010, Long Beach, California.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "Context-aware neural machine translation learns anaphora resolution", |
| "authors": [ |
| { |
| "first": "Elena", |
| "middle": [], |
| "last": "Voita", |
| "suffix": "" |
| }, |
| { |
| "first": "Pavel", |
| "middle": [], |
| "last": "Serdyukov", |
| "suffix": "" |
| }, |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In Proceedings of ACL, Melbourne, Australia.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Women through the glass ceiling: Gender asymmetries in Wikipedia", |
| "authors": [ |
| { |
| "first": "Claudia", |
| "middle": [], |
| "last": "Wagner", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduardo", |
| "middle": [], |
| "last": "Graells-Garrido", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Garcia", |
| "suffix": "" |
| }, |
| { |
| "first": "Filippo", |
| "middle": [], |
| "last": "Menczer", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "EPJ Data Science", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Claudia Wagner, Eduardo Graells-Garrido, David Garcia, and Filippo Menczer. 2016. Women through the glass ceiling: Gender assymetrics in Wikipedia. EPJ Data Science, 5.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Learning global features for coreference resolution", |
| "authors": [ |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Wiseman", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "M" |
| ], |
| "last": "Rush", |
| "suffix": "" |
| }, |
| { |
| "first": "Stuart", |
| "middle": [ |
| "M" |
| ], |
| "last": "Shieber", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of NAACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "994--1004", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning global features for coreference resolution. In Proceedings of NAACL-HLT, pages 994-1004, San Diego, California.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Improving pronoun resolution using statisticsbased semantic compatibility information", |
| "authors": [ |
| { |
| "first": "Xiaofeng", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "Chew Lim", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "165--172", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiaofeng Yang, Jian Su, and Chew Lim Tan. 2005. Improving pronoun resolution using statistics- based semantic compatibility information. In Proceedings of ACL, pages 165-172, Ann Arbor, Michigan.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Gummadi", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of WWW", |
| "volume": "", |
| "issue": "", |
| "pages": "1171--1180", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gummadi. 2017. Fairness beyond disparate treatment & disparate impact: Learning clas- sification without disparate mistreatment. In Proceedings of WWW, pages 1171-1180, Perth, Australia.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "Men also like shopping: Reducing gender bias amplification using corpus-level constraints", |
| "authors": [ |
| { |
| "first": "Jieyu", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Tianlu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Yatskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Vicente", |
| "middle": [], |
| "last": "Ordonez", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "2941--2951", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias am- plification using corpus-level constraints. In Proceedings of EMNLP, pages 2941-2951, Copenhagen, Denmark.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "Gender bias in coreference resolution: Evaluation and debiasing methods", |
| "authors": [ |
| { |
| "first": "Jieyu", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Tianlu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Yatskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Vicente", |
| "middle": [], |
| "last": "Ordonez", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of NAACL, New Orleans, Louisiana.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "we use Berkeley preprocessing(Durrett and Klein, 2014); and the Stanford systems are run within Stanford CoreNLP(Manning et al., 2014).", |
| "num": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "One possible reason that in-domain OntoNotes performance and out-of-domain GAP Precision-Recall on the GAP development data set-Overall (solid markers), Masculine, Feminine-for off-the-shelf resolvers and Parallelism.", |
| "num": null |
| }, |
| "TABREF0": { |
| "content": "<table><tr><td>Type</td><td>Pattern</td><td>Example</td></tr><tr><td>FINALPRO</td><td>(Name, Name, Pronoun)</td><td/></tr></table>", |
| "text": "Preckwinkle criticizes Berrios' nepotism: [. . . ] County's ethics rules don't apply to him. MEDIALPRO (Name, Pronoun, Name) McFerran's horse farm was named Glen View. After his death in 1885, John E. Green acquired the farm. INITIALPRO (Pronoun, Name, Name) Judging that he is suitable to join the team, Butcher injects", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF1": { |
| "content": "<table><tr><td>Dimension</td><td>Values</td><td>Ratio</td></tr><tr><td>Page coverage</td><td/><td>1 per page per</td></tr><tr><td/><td/><td>pronoun form</td></tr><tr><td>Gender</td><td>masc. : fem.</td><td>1 : 1</td></tr><tr><td colspan=\"2\">Extraction Pattern final : medial : initial</td><td>6.2 : 1 : 1</td></tr><tr><td>Page Entity</td><td>true : false</td><td>1.3 : 1</td></tr><tr><td>Coreferent Name</td><td>nameA : nameB</td><td>1 : 1</td></tr></table>", |
| "text": "Extraction patterns and example contexts for each.", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF2": { |
| "content": "<table/>", |
| "text": "", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF4": { |
| "content": "<table/>", |
| "text": "Consensus label counts for the extracted examples (Raw) and after further filtering (Final).", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF5": { |
| "content": "<table><tr><td colspan=\"5\">Feminine (Bias shows F/M), and Overall. Bold indi-</td></tr><tr><td>cates best performance.</td><td/><td/><td/></tr><tr><td/><td>M</td><td>F</td><td>B</td><td>O</td></tr><tr><td>Lee et al. (2013)</td><td colspan=\"4\">47.7 53.2 1.12 49.2</td></tr><tr><td colspan=\"5\">Clark and Manning 64.3 63.9 0.99 64.2</td></tr><tr><td>Wiseman et al.</td><td colspan=\"4\">61.9 58.0 0.94 60.6</td></tr><tr><td>Lee et al.</td><td/><td/><td/></tr></table>", |
| "text": "Performance of off-the-shelf resolvers on the GAP development set, split by Masculine and", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF6": { |
| "content": "<table/>", |
| "text": "", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF8": { |
| "content": "<table/>", |
| "text": "Performance of our baselines on the development set. Parallelism+URL tests the page-context setting; all other test the snippet-context setting. Bold indicates best performance in each setting.", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF11": { |
| "content": "<table><tr><td>: Coreference signal of a Transformer model on</td></tr><tr><td>the validation dataset, by encoder attention layer and</td></tr><tr><td>head.</td></tr></table>", |
| "text": "", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF13": { |
| "content": "<table><tr><td/><td>M</td><td>F</td><td>B</td><td>O</td></tr><tr><td>Lee et al. (2017)</td><td colspan=\"4\">67.7 60.0 0.89 64.0</td></tr><tr><td>Parallelism</td><td colspan=\"4\">69.4 64.4 0.93 66.9</td></tr><tr><td colspan=\"5\">Parallelism+URL 72.3 68.8 0.95 70.6</td></tr></table>", |
| "text": "Comparison of the predictions of the PARAL-LELISM and TRANSFORMER-SINGLE heuristics over the GAP development dataset.", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF14": { |
| "content": "<table/>", |
| "text": "Baselines on the GAP challenge test set.", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF15": { |
| "content": "<table/>", |
| "text": "Analysis of the GAP development examples by the number of systems (out of 4) agreeing with gold.", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF16": { |
| "content": "<table><tr><td>Category</td><td>Description</td><td>Example (abridged)</td><td>#</td></tr><tr><td>NARRATIVE</td><td>Inference involving the roles</td><td/><td>28</td></tr><tr><td>ROLES</td><td>people take in described events</td><td/><td/></tr><tr><td>COMPLEX</td><td>Syntactic cues are present</td><td/><td>20</td></tr><tr><td>SYNTAX</td><td>but in complex constructions</td><td/><td/></tr><tr><td colspan=\"2\">TOPICALITY Inference involving the</td><td>The disease is named after Eduard Heinrich Henoch (1820-</td><td>15</td></tr><tr><td/><td>entity topicality, inc. paren-</td><td>1910), a German pediatrician (nephew of Moritz Heinrich</td><td/></tr><tr><td/><td>theticals</td><td>Romberg) and his teacher</td><td/></tr><tr><td>DOMAIN KNOWLEDGE</td><td>Inference involving knowl-edge specific to a domain,</td><td colspan=\"2\">The half finished 4-0, after Hampton converted a penalty awarded against 6</td></tr><tr><td/><td>e.g. sport</td><td/><td/></tr><tr><td>ERROR</td><td>Annotation error, inc. truly</td><td/><td/></tr><tr><td/><td>ambiguous cases</td><td/><td/></tr></table>", |
| "text": "breaks down the name-pronoun examples in the development set byAs Nancy tried to pull Hind down by the arm in the final meters as what was clearly an attempt to drop her[...] Sheena thought back to the 1980s[...] and thought of idol Hiroko Mita, who had appeared on many posters for medical products, acting as if her stomach or head hurt Arthur Knight for handball when Fleming's powerful shot struck his arm.When she gets into an altercation with Queenie, Fiona makes her act as Queenie's slave[...] 6 Fine-grained categorization of 75 Red examples from the GAP development set (no system agreed with the worker-selected name). Underlining indicates the rater-selected name.", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| } |
| } |
| } |
| } |