| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:21:21.987081Z" |
| }, |
| "title": "Enhanced Labelling in Active Learning for Coreference Resolution", |
| "authors": [ |
| { |
| "first": "Vebj\u00f8rn", |
| "middle": [], |
| "last": "Espeland", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Edinburgh", |
| "location": { |
| "addrLine": "Opus 2 International" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Bach", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Edinburgh", |
| "location": {} |
| }, |
| "email": "bbach@inf.ed.ac.uk" |
| }, |
| { |
| "first": "Beatrice", |
| "middle": [], |
| "last": "Alex", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Edinburgh", |
| "location": {} |
| }, |
| "email": "balex@ed.ac.uk" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper we describe our attempt to increase the amount of information that can be retrieved through active learning sessions compared to previous approaches. We optimise the annotator's labelling process using active learning in the context of coreference resolution. Using simulated active learning experiments, we suggest three adjustments to ensure the labelling time is spent as efficiently as possible. All three adjustments provide more information to the machine learner than the baseline, though a large impact on the F1 score over time is not observed. Compared to previous models, we report a marginal F1 improvement on the final coreference models trained using for two out of the three approaches tested when applied to the English OntoNotes 2012 Coreference Resolution data. Our best-performing model achieves 58.01 F1, an increase of 0.93 F1 over the baseline model.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper we describe our attempt to increase the amount of information that can be retrieved through active learning sessions compared to previous approaches. We optimise the annotator's labelling process using active learning in the context of coreference resolution. Using simulated active learning experiments, we suggest three adjustments to ensure the labelling time is spent as efficiently as possible. All three adjustments provide more information to the machine learner than the baseline, though a large impact on the F1 score over time is not observed. Compared to previous models, we report a marginal F1 improvement on the final coreference models trained using for two out of the three approaches tested when applied to the English OntoNotes 2012 Coreference Resolution data. Our best-performing model achieves 58.01 F1, an increase of 0.93 F1 over the baseline model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Coreference resolution (CR) is the task of resolving which noun phrases (NP) in a text are referring to the same entity. It is related to entity linking, but does not involve an external knowledge base. It is an important task in information extraction, as a step in structuring the unstructured information in natural language. CR has traditionally been a difficult problem, as it is hard to accurately predict coreference links without extensive real-world knowledge. Figure 1 : Different types of coreference resolution. An anaphoric pair of noun phrases is marked in green, and a cataphoric pair is marked in yellow. From \"T2: Trainspotting\" (Boyle, 2017) An example of different levels of CR is shown in Figure 1 . The mentions \"us\" and \"I\" are both singletons, and are not coreferring with anything in this text. The noun phrase \"she\" is anaphoric (where the pronoun points backwards to its antecedent) with \"the Queen\". The pronoun \"You\" in \"You've\" is coreferring with \"Mr Begbie\", but the pronoun is pointing forward to its coreferent, this type of coreference is cataphoric coreference.", |
| "cite_spans": [ |
| { |
| "start": 646, |
| "end": 659, |
| "text": "(Boyle, 2017)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 470, |
| "end": 478, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 709, |
| "end": 717, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Many of the most successful coreference resolution approaches have used hand-crafted corpora, such as ACE (NIST, 2004) , GAP (Webster et al., 2018) and OntoNotes (Pradhan et al., 2012) . Models trained using these datasets, though comparatively successful, do not necessarily generalise to domain specific data, or noisy data. Making these big datasets is also a very expensive task, which is very difficult for low resource languages.", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 118, |
| "text": "(NIST, 2004)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 125, |
| "end": 147, |
| "text": "(Webster et al., 2018)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 152, |
| "end": 184, |
| "text": "OntoNotes (Pradhan et al., 2012)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Active learning is a human-in-the-loop approach to machine learning, where a sample selection algorithm chooses the most informative samples for a human to annotate. This approach will reduce the total amount of samples which need to be labelled to achieve high accuracy, and in some cases it accelerates the otherwise expensive process of hand-crafting fully labelled datasets. Iteratively training and labelling this way would lead to higher accuracy models faster than training with random sampling.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The most expensive part of dataset creation is the labelling effort of the annotators. Therefore using the annotator's time as efficiently as possible should be a key focus in developing active learning techniques. As previous research (Section 2.2) has focused on which samples to label, this article will focus on improving the use of the annotator's time. The objective of this research is to improve the amount of information that can be retrieved through the active learning sessions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Aiming to use the annotator's time as efficiently as possible, this article suggests three improvements to recent developments in active learning for coreference resolution. We investigate whether it is effective to label all the instances of an entity once the user has been asked to provide the first label of the entity. We also suggest an improvement based on allowing the user to edit an incorrectly identified mention and then provide coreference information, rather than disregarding that candidate coreferent pair. Finally, for mentions which are the first instances of their entity, such as the example of \"Mr Begbie\" above, we allow the user to provide cataphoric labels. We use the English OntoNotes 2012 Coreference Resolution dataset provided by the CoNLL 2012 shared task (Pradhan et al., 2012) to simulate dataset creation using active learning techniques.", |
| "cite_spans": [ |
| { |
| "start": 786, |
| "end": 808, |
| "text": "(Pradhan et al., 2012)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper we firstly review the related work on coreference resolution and active learning in Section 2. Then in Section 3 and 4 we explain the experimental methodology and review the results. Finally in Section 5 and 6 we analyse the results before our conclusions and directions for future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A detailed review of the early research in coreference resolution was made by Ng (2010) . I will summarise this in short in this section, and move on to reviewing the later research, especially the approaches using deep learning.", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 87, |
| "text": "Ng (2010)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference resolution", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Past coreference resolution research can be divided into two approaches: mention-pair and mentionranking. The mention-pair models attempt to reduce the coreference resolution challenge to a binary problem, whether two NPs are coreferring or not. Aone and Bennett (1995) and McCarthy and Lehnert (1995) were early proponents of this method. The mention-ranking models aim to rank the candidate antecedent mentions according to likelihood of coreferring. Connolly et al. (1997) were the first to apply this approach. Other mention ranking approaches include Iida et al. (2003) , Yang et al. (2003) , and Yang et al. (2008) . Durrett and Klein (2013) tried to reduce the amount of expensive hand-crafted features. This idea was picked up by Wiseman et al. (2015) . The benefit of using neural networks is that the fine-tuning of these features is left in the hidden layers of the network. With the arrival of word-embedding techniques after the very influential paper by Mikolov et al. (2013) , much of the research in natural language processing (NLP), including coreference resolution, took a step in the direction of using neural networks.", |
| "cite_spans": [ |
| { |
| "start": 246, |
| "end": 269, |
| "text": "Aone and Bennett (1995)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 274, |
| "end": 301, |
| "text": "McCarthy and Lehnert (1995)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 453, |
| "end": 475, |
| "text": "Connolly et al. (1997)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 556, |
| "end": 574, |
| "text": "Iida et al. (2003)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 577, |
| "end": 595, |
| "text": "Yang et al. (2003)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 598, |
| "end": 620, |
| "text": "and Yang et al. (2008)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 623, |
| "end": 647, |
| "text": "Durrett and Klein (2013)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 738, |
| "end": 759, |
| "text": "Wiseman et al. (2015)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 968, |
| "end": 989, |
| "text": "Mikolov et al. (2013)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference resolution", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Clark and Manning (2016a) used a deep neural network to capture a larger set of learned, continuous features indicating that more entity-level information is beneficial to the coreference task. Based on this finding, they trained a neural mention-ranking model using reinforcement learning (Clark and Manning, 2016b) . They claimed that, despite being less expressive than the entity-centric models of Haghighi and Klein (2010; Clark and Manning (2015) , their model is faster, more scalable and simpler to train. Lee et al. (2017) presented a neural end-to-end coreference resolution system, without using a syntactic parser or a mention detector to extract the candidate mentions. They combined context-dependent boundary representations with an attention mechanism for NP head finding, inspired by Durrett and Klein (2013) to treat aggregated spans of words as a unit. The likelihood of two spans being coreferent is determined by merging the likelihood of either span being a mention with the likelihood of them coreferring.", |
| "cite_spans": [ |
| { |
| "start": 290, |
| "end": 316, |
| "text": "(Clark and Manning, 2016b)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 402, |
| "end": 427, |
| "text": "Haghighi and Klein (2010;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 428, |
| "end": 452, |
| "text": "Clark and Manning (2015)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 514, |
| "end": 531, |
| "text": "Lee et al. (2017)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 801, |
| "end": 825, |
| "text": "Durrett and Klein (2013)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference resolution", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Finally, with the arrival of transformers and BERT (Devlin et al., 2018) , the field of NLP took another leap forward. Coreference resolution approaches using BERT include Joshi et al. (2019) and Joshi et al. (2020) .", |
| "cite_spans": [ |
| { |
| "start": 51, |
| "end": 72, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 172, |
| "end": 191, |
| "text": "Joshi et al. (2019)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 196, |
| "end": 215, |
| "text": "Joshi et al. (2020)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference resolution", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "When building a dataset for NLP tasks, a human annotator would normally have to label every single sample in the dataset which is a very expensive process. The use of active learning is an appealing solution to creating and labelling datasets, as the human annotator would only have to annotate the most informative samples. There are two main considerations in the active learning process outside of user interface design: how to choose which samples to label, and how to label them. The first consideration has been the most researched, the second is the focus of this article.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "There is an array of techniques to choose which samples to label next. Using an informativeness measure such as entropy enables an algorithm to choose the samples with the highest uncertainty. Lewis and Gale (1994) , Gasperin (2009) and Schein and Ungar (2007) use this technique with varying degrees of success. Other methods include ensemble models like query-by-committee (QBC) and cluster-outlier methods. Sachan et al. (2015) reviewed these and found that all these methods performed better than random sampling, and that the ensemble model is the best performing one. Settles (2009) reviewed general active learning literature, and Olsson (2009) reviewed the AL literature within the scope of NLP. Recently, Shen et al. (2017) used active learning for named entity recognition, achieving close to stateof-the-art results with only 25% of the training data.", |
| "cite_spans": [ |
| { |
| "start": 193, |
| "end": 214, |
| "text": "Lewis and Gale (1994)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 217, |
| "end": 232, |
| "text": "Gasperin (2009)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 237, |
| "end": 260, |
| "text": "Schein and Ungar (2007)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 410, |
| "end": 430, |
| "text": "Sachan et al. (2015)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 574, |
| "end": 588, |
| "text": "Settles (2009)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 638, |
| "end": 651, |
| "text": "Olsson (2009)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 714, |
| "end": 732, |
| "text": "Shen et al. (2017)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "For deciding what to do with the selected samples, the dominant approach has been binary pairwise selection for potential manual coreference annotation (Gasperin, 2009; Laws et al., 2012; Zhao and Ng, 2014; Sachan et al., 2015) . This approach pairs up candidate mentions with candidate antecedents, and the annotator can discard or accept a mention-pair dependent on whether they are coreferring or not. Sachan et al. (2015) introduced must-link (ML) and cannot-link (CL) constraints as a method of storing user annotations. The mention-pairs which where deemed coreferent received the ML constraint, and the ones deemed not coreferent received the CL constraint, where the coreference likelihood of those pairs was set to 1 and 0 respectively. Applying transitivity (if A is coreferent with B, and B with C, then A and C must also be coreferent) to these constraints means more labels can be distributed without extra labelling.", |
| "cite_spans": [ |
| { |
| "start": 152, |
| "end": 168, |
| "text": "(Gasperin, 2009;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 169, |
| "end": 187, |
| "text": "Laws et al., 2012;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 188, |
| "end": 206, |
| "text": "Zhao and Ng, 2014;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 207, |
| "end": 227, |
| "text": "Sachan et al., 2015)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 405, |
| "end": 425, |
| "text": "Sachan et al. (2015)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Li et al. (2020) improved on the mention-pair constraints by using span embeddings instead of mentions, as successfully applied to coreference resolution in Lee et al. (2017) . They also augmented the pair-wise annotation with a second step of marking the first occurrence of the entity if the span pair is not coreferent, introducing the notion of discrete annotations.", |
| "cite_spans": [ |
| { |
| "start": 157, |
| "end": 174, |
| "text": "Lee et al. (2017)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The marking of the first occurrence of the entity allows the annotator to cluster the entities. Together with the notion of transitivity, this makes annotation more efficient, as it makes use of some false negatives. However, this approach, though better than pairwise decision, still does not make use of the false positives. It also ignores readily available information about other occurrences of the entity in question.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "It takes time for an annotator to find the first sample of the highlighted entity, particularly if the document they are labelling is more than a few sentences. When the annotator has spent the time finding the first occurrence of the entity, they will have identified many, if not all, of the other occurrences of that entity, and it will be relatively cheap to annotate all the occurrences in the document. A good interface will have predicted and highlighted these occurrences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "If the sample turns out to be negative, e.g. by the proform span (the span in question, as opposed to the antecedent span) being the first span in the document, then allowing the annotator to label cataphoric spans would also contribute towards the goal of increasing annotator efficiency.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The setup in Li et al. (2020) allows a candidate coreferent pair to be disregarded in three ways, where only the third way should be a valid reason for disregarding:", |
| "cite_spans": [ |
| { |
| "start": 13, |
| "end": 29, |
| "text": "Li et al. (2020)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "1. The span is incorrectly identified, and is not a valid noun phrase.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "2. The span is the first mention of that entity (and thus has no antecedent).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "3. The span is the only mention of that entity in the document.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The following section will elaborate on the experiments to improve upon these shortcomings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The experiments reported in this paper investigate a set of different methods for conducting manual annotation during an active learning scenario.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Previous approaches to active learning for coreference resolution have focused primarily on antecedent labelling, ignoring potential occurrences following an entity. The OntoNotes dataset is not made with specific cataphoric linkings. This makes it more difficult to test how well the system performs when adding cataphoric data. It is still however possible to retrieve cataphoric mentions of an entity from the dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discrete annotation with cataphoric links", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Even though the sample selection algorithm will only select entities with a candidate antecedent, it should be possible for the annotator to choose cataphoric occurrences. Our simulated experiment will test whether allowing the annotator to select cataphoric mentions will have an impact on how many label queries are disregarded.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discrete annotation with cataphoric links", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "This is motivated by the experience that it is easier to label multiple spans of the same entity in the same document than it is to annotate just one instance, even if the document contains several occurrences of that entity. Even though more samples are being labelled, and those samples are not necessarily the most informative ones, they will still provide more information per query and per clock-time than strictly pair-wise or discrete annotation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotating all spans for the queried entity in the document", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The improvement would be made by adding multiple ML and CL constraints for each query. Every time a suggested pair is not the final pair of that query a CL constraint is applied, and every label the annotator selects receives a ML constraint. This, combined with transitivity constraints (elaborated in Li et al. (2020)), is hypothesised to increase the amount of information available to the learner.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotating all spans for the queried entity in the document", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Whether the annotator is helped by interface highlighting of predictions or not, a potential challenge with asking an annotator to label all occurrences of an entity in the document is that they are susceptible to losing focus due to boredom or time pressure. In these situations it is plausible that there will be a certain amount of error. Taking inspiration from Sachan et al. (2015) , which included user labelling error as a hyperparameter, we include labelling error in our experiments.", |
| "cite_spans": [ |
| { |
| "start": 366, |
| "end": 386, |
| "text": "Sachan et al. (2015)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation error", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "In previous approaches to active learning for coreference resolution, when an annotator is queried with a span which is incorrectly identified as a span, that query is disregarded. There is no difference between a CL constraint because of correctly identified spans not linking, and a CL constraint caused by correctly linked but incorrectly identified spans. These kinds of boundary errors are common in entity recognition, and these frequent errors can have a big impact on downstream performance. In the discrete annotation, Li et al. (2020) improved this problem by making the user click all the words in the antecedent span, building the span word by word. However, they did not allow the user to correct the proform span. This limitation also applies to their simulated experiments.", |
| "cite_spans": [ |
| { |
| "start": 528, |
| "end": 544, |
| "text": "Li et al. (2020)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Enabling span editing and annotating all spans", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "We therefore allow the user to correct the proform span. The method for manually correcting the proform span is letting the annotator choose which words belong to the span. In the simulated experiment we scan the indeces of all spans in that document for the closest span that belongs to a coreference cluster in the dataset. We then find an antecedent to the new proform, and make a new ML constraint, leaving a CL constraint to the initial candidate pair. If the nearest span is not coreferent with any other span in the document, the incorrectly identified span is unlikely to be a boundary error, and the query is therefore disregarded as not coreferring.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Enabling span editing and annotating all spans", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "We compare the baseline discrete labelling system versus enhanced labelling using the standard English CoNLL-2012 coreference resolution dataset (Pradhan et al., 2012). Following both Li et al. (2020) and Sachan et al. (2015) , user labelling is simulated from the gold standard labels in the CoNLL dataset.", |
| "cite_spans": [ |
| { |
| "start": 184, |
| "end": 200, |
| "text": "Li et al. (2020)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 205, |
| "end": 225, |
| "text": "Sachan et al. (2015)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In the field of coreference resolution there are multiple ways of scoring a system, each with their own benefits and drawbacks. A somewhat standardised option, and the one chosen to evaluate the experiments reported in this paper, is to combine the recall and precision from MUC (Vilain et al., 1995) , B 3 (Bagga and Baldwin, 1998) and CEAFe (Luo, 2005) as an average F1 score. We compute this score with the official CONLL-2012 evaluation scripts.", |
| "cite_spans": [ |
| { |
| "start": 279, |
| "end": 300, |
| "text": "(Vilain et al., 1995)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 307, |
| "end": 332, |
| "text": "(Bagga and Baldwin, 1998)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 343, |
| "end": 354, |
| "text": "(Luo, 2005)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation metric", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We also compare the amount of successful queries in each AL session as a metric of how successful the annotation approach is at providing positive training examples. A successful query is a query which returns a coreferent pair, regardless of whether the original proform or antecedent candidate were coreferent or not. This way, there will be at least one ML constraint from that query. An unsuccessful query does not return a coreferent pair, and the only thing that can be learnt from that query is that the original proform and antecedent candidates are not coreferent, resulting in only one CL constraint.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation metric", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For the sake of comparison we use the same coreference model as in (Li et al., 2020) . They use the AllenNLP implementation of Lee et al. (2017) , which keeps all the hyperparameters, except that it excludes speaker features, variational dropout and limits the maximum number of considered antecedents to 100. In Lee et al. (2017) , they use GloVe embeddings (Pennington et al., 2014) as word embeddings. They use a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) , where the hidden states have 200 dimensions, to represent the aggregated word spans. The model internal scoring for determining whether a span is a mention, and whether two mentions are coreferring, is using feed-forward neural networks consisting of two hidden layers with 150 dimensions and rectified linear units (Nair and Hinton, 2010) . The optimiser used is ADAM (Kingma and Ba, 2014).", |
| "cite_spans": [ |
| { |
| "start": 67, |
| "end": 84, |
| "text": "(Li et al., 2020)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 127, |
| "end": 144, |
| "text": "Lee et al. (2017)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 313, |
| "end": 330, |
| "text": "Lee et al. (2017)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 359, |
| "end": 384, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 435, |
| "end": 469, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 788, |
| "end": 811, |
| "text": "(Nair and Hinton, 2010)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural network architecture", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We ran simulated AL experiments with the OntoNotes 2012 Coreference Resolution dataset using the following setup. Each experiment is based on Li et al. (2020), using their entropy selector as sample selection algorithm, selecting 20 queries from each document. The OntoNotes is split into 2802 training documents, 343 validation documents and 348 testing documents. The validation set is used to compute F1 score while training, whereas the test set is used only for final F1 score computation after training has finished.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "A 700-document subset of the training data is set aside, and the initial model is trained on this subset. The model trains until convergence with a patience of 2 epochs, up to 20 epochs, before adding more data. Then 280 documents are labelled in an AL session. After these 280 documents are labelled, they are added to the 700 documents, and training continues on the now 980 documents in the set aside training subset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "This continues until all the 2802 documents in the training set have been labelled. Finally, a new model trained on all the 2802 training documents with all the model and training parameters reset. This last step is to make the final model comparable to other models trained without AL, and use the same hyperparameter as Lee et al. (2017) . There are 20 span-pair queries per document in the AL session, meaning 5600 queries per AL session, and a total of 39200 queries over the 8 AL sessions.", |
| "cite_spans": [ |
| { |
| "start": 322, |
| "end": 339, |
| "text": "Lee et al. (2017)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "For labelling with error, 10% of the labels retrieved in the annotation session are set to a random span in the document. We implement this by introducing a 10% chance of having a random span chosen instead of a coreferring span. This is to prevent the erroneous labels systematically having the same index each AL session.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.3" |
| }, |
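The per-query error injection just described can be sketched as follows; a minimal illustration under our own naming, not the paper's implementation.

```python
import random

def noisy_label(true_span, all_spans, error_rate=0.10, rng=random):
    """Return the coreferring span, or a uniformly random span with prob. error_rate.

    Drawing the error independently per query (rather than corrupting a fixed
    10% of label indices) prevents the same positions from being wrong in
    every AL session.
    """
    if rng.random() < error_rate:
        return rng.choice(all_spans)  # may occasionally re-pick the true span
    return true_span

rng = random.Random(0)
spans = list(range(100))
labels = [noisy_label(7, spans, rng=rng) for _ in range(10_000)]
print(sum(l != 7 for l in labels) / len(labels))  # close to 0.10
```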
| { |
| "text": "We include one baseline experiment from Li et al. (2020) . The experiment is using discrete annotation with the same parameters as our experiments, but we report the F1 score for the baseline with the best performing experiment from Li et al. (2020) , which uses a query-by-committee system with three models. This is done to compare the results of our experiments to the currently best performing coreference resolution system using AL.", |
| "cite_spans": [ |
| { |
| "start": 40, |
| "end": 56, |
| "text": "Li et al. (2020)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 233, |
| "end": 249, |
| "text": "Li et al. (2020)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "In the baseline experiment and Experiments 1 and 3, the annotator is only allowed to select one occurrence of the proform entity. In Experiment 2 the annotator labels all the anteceding occurrences of the proform, whereas in 4 and 5 the annotator labels all the occurrences of that entity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We also perform a timed annotation exercise with the same setup as in Li et al. (2020) . We recruited 10 annotators with experience in text processing, who annotated for 30 minutes each. Li et al. (2020) used annotators with NLP experience, whereas our annotators did not that but are skilled in working with speech transcripts. This might impact the absolute annotation time, but the relative annotation time within our group of annotators should still be informative. The annotators in Li et al. (2020) were asked a pair-wise question first, and in the case of non-coreference they were asked to annotate the first instance of the entity. In contrast, we asked our annotators to label all instances of the entity in the case. When an annotator provided only one extra instance of the entity, that was noted as a \"follow-up question\", whereas when they labelled more than one extra instance of the entity it was noted as a \"multi-response\". We used the same annotation interface as in Li et al. (2020) , but altered it to allow cataphoric labelling as well as multiple labels per query. Table 1 shows the results from our timed annotation exercise. In our experiment the annotators spent longer on the initial question (20.66 s), but were faster on supplying answers for the follow-up question (12.61 s). When annotating more than one extra occurrence, the time taken for each of those occurrences was lower than answering the initial question.", |
| "cite_spans": [ |
| { |
| "start": 70, |
| "end": 86, |
| "text": "Li et al. (2020)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 187, |
| "end": 203, |
| "text": "Li et al. (2020)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 488, |
| "end": 504, |
| "text": "Li et al. (2020)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 986, |
| "end": 1002, |
| "text": "Li et al. (2020)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1088, |
| "end": 1095, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The average normalised annotation time per occurrence was 16.57 seconds. In contrast, the annotators' median normalised annotation time was only 10.26 seconds per occurrence. This indicates that the distribution of annotation times is higher at the lower end, and that there were a few queries with very Table 1 : Results for the timed annotation exercise. We first list the results from the corresponding timed exercise reported in Li et al. (2020) . The fourth and fifth results for our equivalent experiments, with the exception that the annotators were allowed to select any instance of the entity in the follow-up, not just the first. The final time in the table is the average time taken for the annotators to label every instance of the entity, normalised by the number of labels in each query. Validation F1 score for each epoch in training #0: Baseline system #1: Single following span #2: All anteceding spans #4: All spans #5: 10% error Figure 2 : The F1 score while training for each experiment. This score is computed using the validation dataset. As expected, the scores are similar at the earlier stages, when the model is trained on the same number of labels. For the later epochs the models trained on more labels, Experiment 2 and 4, perform marginally better than the other models. The dip in F1 score around epoch 50 represent the retraining of the model from scratch after all the documents have been labelled.", |
| "cite_spans": [ |
| { |
| "start": 433, |
| "end": 449, |
| "text": "Li et al. (2020)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 304, |
| "end": 311, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 948, |
| "end": 956, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.4" |
| }, |
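The gap between the mean (16.57 s) and median (10.26 s) normalised times is characteristic of a right-skewed distribution. A toy illustration with hypothetical times (not the study's raw data) shows how a few slow queries inflate the mean:

```python
from statistics import mean, median

# Hypothetical per-occurrence annotation times in seconds; a few very slow
# queries pull the mean well above the median, as in the exercise reported above.
times = [3.1, 5.0, 8.2, 9.5, 10.3, 11.0, 12.4, 60.0, 124.95]
print(round(mean(times), 2), median(times))  # mean 27.16 > median 10.3
```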
| { |
| "text": "long times which might have skewed the average. The fastest annotations for the multi-response queries were made in 2.07 when normalised for the number of labels annotated in that query. The slowest annotations took 124.95 seconds. Figure 2 plots the F1 score over the training epochs, using the validation data. The improvements in F1 over the epochs are very similar for each of the training methods in the early stages, but in the later stages the active learning approaches which allow multiple labelling come out on top.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 232, |
| "end": 240, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "In the baseline experiment 49% of the queries return a coreferent label pair, which means over half of the queries did not result in a ML constraint. In Experiment 1 that number is increased to 54%, as can be seen in Table 2 . This is a reduction of disregarded queries by 11%. In Experiment 2 and 4 the simulated annotator is instructed to label all the occurrences of the entity in the given document, which results in several label pairs per query. For Experiment 2 there are 0.93 label pairs per query, whereas for Experiment 4 there are 1.41 label pairs per query.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 217, |
| "end": 224, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "There was no difference between the labels retrieved for Experiment 3, where the annotator was allowed to edit proform spans and the results for the baseline experiment. A total of 6 spans were edited # Experiment Successful labels per query CONLL F1 score 0 Discrete annotation (Li et al., 2020) Table 2 : Experiments for the AL models, with the F1 score representing the performance on the final models on the test set. The \"Successful label per query\" column explains how many queries returned with positive coreferent pairs. The F1 score for the baseline (Experiment 0) is achieved using a sample selector with the query-by-committee approach. When Experiment 2 and 4 are close to and exceeding 1 that is because they are returning more than one label pair per query. under the simulated experiment.", |
| "cite_spans": [ |
| { |
| "start": 279, |
| "end": 296, |
| "text": "(Li et al., 2020)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 297, |
| "end": 304, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "In Figure 3 the label pair counts are separated into the active learning sessions, and normalised by average document length for that session. This measure can be seen as an average number of successful label-pairs per document. In Experiment 1 there are marginally more labels successfully identified than in the baseline system. For both Experiment 2 and 4 the AL sessions provide many more label pairs per document, up to an average 27.16 label pairs for Experiment 4 in AL session 6. The efficacy of the combined model is reduced when 10% labelling error is added in each AL session, but Experiment 5 still provided more labels than the baseline system.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 11, |
| "text": "Figure 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "The timed annotation exercise show that the cost of annotating all the labels of an entity in a text is low when the annotator has already read the text to make a judgement on the initial coreference pair. The results also show that there might be a cut tail distribution of annotation times. The majority of the multi-response annotations were faster than the initial and the single-response follow-up question responses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "On average it took our annotators longer time for the initial question in our implementation of the same timed annotation exercise as in Li et al. (2020) , but shorter time for the follow-up question. People working in NLP are likely to be more experienced with seeing text containing bracketed annotation. It is possible that our set of annotators were slower at responding for the initial question because of the lack of experience in NLP.", |
| "cite_spans": [ |
| { |
| "start": 137, |
| "end": 153, |
| "text": "Li et al. (2020)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "One reason the average time for answering the follow-up question was lower in our setup might be that the annotators were allowed to label any instance of the occurrence, not just the first. Particularly for longer texts it might be faster to label an occurrence closer to the proform entity than the first occurrence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "From Figure 3 we can see that the labelling approach in Experiment 1 returns more labels per query than the baseline approach, through the AL sessions. The same is true for Experiment 4 and 2 respectively. This indicates that cataphoric occurrences contain unused information, which should be used for training. The sudden jump in successful queries in AL sessions 6 and 7 for Experiment 2 and 4 can partly be ascribed to an increase in document length in those sessions, even though the graph is normalised to document length. This might mean that models trained on datasets with longer documents are able to benefit more from the improved label retrieval rate.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 5, |
| "end": 13, |
| "text": "Figure 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Even with 10% of the labels chosen at random the combined approach retrieved more successful label pairs than the baseline system, but the final F1 score was somewhat lower. This lower score F1 was expected, as the erroneous labelling would add confusion to the model. Care should therefore be taken when designing a labelling system to ensure that errors are minimised.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The small improvement in the validation F1 score shown in Figure 2 indicates that the added labels under the current system do not translate into having an impact on how fast high accuracy is achieved. Despite this, the final F1 score on the separate test data is marginally higher for Experiment 4 than the baseline experiment.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 58, |
| "end": 66, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "This lack of impact could have several causes. As the machine learning algorithm is the same as in the baseline system, it might not be best suited to make use of the extra available information. In addition, the OntoNotes dataset does not inherently support cataphoric linking of entities, so a dataset which does contain inherent cataphoric links might also contribute towards making use of the extracted data more efficiently.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The negative results for Experiment 3 can have multiple causes. One of these is that the algorithm for selecting replacement proform spans was purposefully conservative in choosing the closest span. This was to retain ecological validity in the annotation simulation, as an annotator would look close to the span to determine whether the error was a boundary error.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The contribution of the research in this article is the improved techniques for extracting more information from user labelling. We have seen that allowing annotators to leverage cataphoric information, especially in combination with annotating several occurrences per query, can contribute to optimising the time spent by annotators hand labelling a dataset. Even though the machine learning models did not perform markedly better earlier in the training process, the amount of disregarded queries dropped by a noticeable amount just by adding cataphoric labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Research", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We have also seen that the amount of successful label pairs per query is over 1 for the approaches allowing multiple responses. This means that it is possible to extract much more information than with previous approaches. Our timed annotation exercise indicate that labelling several occurrences of an entity in the same query is faster than answering multiple queries with only one set of labels. It would be interesting to investigate whether choosing labels closer or further from the proform label would have an impact on the learning. These findings are interesting for the real world application of coreference resolution systems, particularly for long form documents, such as in the legal sector, where there is a lot more information to leverage than in short form documents. A future project would look into making changes to the machine learning model for more effective use of the new data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Research", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Future research would also look into testing which interface design would best aid the human annotator in the labelling process, especially for long form documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Research", |
| "sec_num": "6" |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Evaluating automated and manual acquisition of anaphora resolution strategies", |
| "authors": [ |
| { |
| "first": "Chinatsu", |
| "middle": [], |
| "last": "Aone", |
| "suffix": "" |
| }, |
| { |
| "first": "Scott", |
| "middle": [ |
| "William" |
| ], |
| "last": "Bennett", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the 33rd annual meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "122--129", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chinatsu Aone and Scott William Bennett. 1995. Evaluating automated and manual acquisition of anaphora resolution strategies. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics, pages 122-129. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Algorithms for scoring coreference chains", |
| "authors": [ |
| { |
| "first": "Amit", |
| "middle": [], |
| "last": "Bagga", |
| "suffix": "" |
| }, |
| { |
| "first": "Breck", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "The first international conference on language resources and evaluation workshop on linguistics coreference", |
| "volume": "1", |
| "issue": "", |
| "pages": "563--566", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The first international conference on language resources and evaluation workshop on linguistics coreference, volume 1, pages 563- 566. Granada.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Entity-centric coreference resolution with model stacking", |
| "authors": [ |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "1405--1415", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kevin Clark and Christopher D Manning. 2015. Entity-centric coreference resolution with model stacking. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1405-1415.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Deep reinforcement learning for mention-ranking coreference models", |
| "authors": [ |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1609.08667" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kevin Clark and Christopher D Manning. 2016a. Deep reinforcement learning for mention-ranking coreference models. arXiv preprint arXiv:1609.08667.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Improving coreference resolution by learning entity-level distributed representations", |
| "authors": [ |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1606.01323" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kevin Clark and Christopher D Manning. 2016b. Improving coreference resolution by learning entity-level dis- tributed representations. arXiv preprint arXiv:1606.01323.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A machine learning approach to anaphoric reference", |
| "authors": [ |
| { |
| "first": "Dennis", |
| "middle": [], |
| "last": "Connolly", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "John", |
| "suffix": "" |
| }, |
| { |
| "first": "David S", |
| "middle": [], |
| "last": "Burger", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Day", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "New methods in language processing", |
| "volume": "", |
| "issue": "", |
| "pages": "133--144", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dennis Connolly, John D Burger, and David S Day. 1997. A machine learning approach to anaphoric reference. In New methods in language processing, pages 133-144.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1810.04805" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirec- tional transformers for language understanding. arXiv preprint arXiv:1810.04805.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Easy victories and uphill battles in coreference resolution", |
| "authors": [ |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Durrett", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1971--1982", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1971-1982.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Active learning for anaphora resolution", |
| "authors": [ |
| { |
| "first": "Caroline", |
| "middle": [], |
| "last": "Gasperin", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Caroline Gasperin. 2009. Active learning for anaphora resolution. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, pages 1-8.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Coreference resolution in a modular, entity-centered model", |
| "authors": [ |
| { |
| "first": "Aria", |
| "middle": [], |
| "last": "Haghighi", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "385--393", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aria Haghighi and Dan Klein. 2010. Coreference resolution in a modular, entity-centered model. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 385-393. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Incorporating contextual cues in trainable models for coreference resolution", |
| "authors": [ |
| { |
| "first": "Ryu", |
| "middle": [], |
| "last": "Iida", |
| "suffix": "" |
| }, |
| { |
| "first": "Kentaro", |
| "middle": [], |
| "last": "Inui", |
| "suffix": "" |
| }, |
| { |
| "first": "Hiroya", |
| "middle": [], |
| "last": "Takamura", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuji", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 EACL Workshop on The Computational Treatment of Anaphora", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryu Iida, Kentaro Inui, Hiroya Takamura, and Yuji Matsumoto. 2003. Incorporating contextual cues in train- able models for coreference resolution. In Proceedings of the 2003 EACL Workshop on The Computational Treatment of Anaphora.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Bert for coreference resolution: Baselines and analysis", |
| "authors": [ |
| { |
| "first": "Mandar", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Daniel", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Weld", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1908.09091" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mandar Joshi, Omer Levy, Daniel S Weld, and Luke Zettlemoyer. 2019. Bert for coreference resolution: Baselines and analysis. arXiv preprint arXiv:1908.09091.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Spanbert: Improving pre-training by representing and predicting spans", |
| "authors": [ |
| { |
| "first": "Mandar", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Danqi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yinhan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Daniel", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Weld", |
| "suffix": "" |
| }, |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "8", |
| "issue": "", |
| "pages": "64--77", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Diederik", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1412.6980" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Active learning for coreference resolution", |
| "authors": [ |
| { |
| "first": "Florian", |
| "middle": [], |
| "last": "Laws", |
| "suffix": "" |
| }, |
| { |
| "first": "Florian", |
| "middle": [], |
| "last": "Heimerl", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "508--512", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Florian Laws, Florian Heimerl, and Hinrich Sch\u00fctze. 2012. Active learning for coreference resolution. In Pro- ceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguis- tics: Human Language Technologies, pages 508-512, Montr\u00e9al, Canada, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "End-to-end neural coreference resolution", |
| "authors": [ |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Luheng", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1707.07045" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. arXiv preprint arXiv:1707.07045.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "A sequential algorithm for training text classifiers", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "D" |
| ], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [ |
| "A" |
| ], |
| "last": "Gale", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "SIGIR'94", |
| "volume": "", |
| "issue": "", |
| "pages": "3--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David D Lewis and William A Gale. 1994. A sequential algorithm for training text classifiers. In SIGIR'94, pages 3-12. Springer.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Active learning for coreference resolution using discrete annotation", |
| "authors": [ |
| { |
| "first": "Belinda", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriel", |
| "middle": [], |
| "last": "Stanovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2004.13671" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Belinda Li, Gabriel Stanovsky, and Luke Zettlemoyer. 2020. Active learning for coreference resolution using discrete annotation. arXiv preprint arXiv:2004.13671.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "On coreference resolution performance metrics", |
| "authors": [ |
| { |
| "first": "Xiaoqiang", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the conference on human language technology and empirical methods in natural language processing", |
| "volume": "", |
| "issue": "", |
| "pages": "25--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Proceedings of the conference on human language technology and empirical methods in natural language processing, pages 25-32. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Using decision trees for coreference resolution", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [ |
| "F" |
| ], |
| "last": "McCarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Wendy", |
| "middle": [ |
| "G" |
| ], |
| "last": "Lehnert", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joseph F McCarthy and Wendy G Lehnert. 1995. Using decision trees for coreference resolution. arXiv preprint cmp-lg/9505043.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Rectified linear units improve restricted Boltzmann machines", |
| "authors": [ |
| { |
| "first": "Vinod", |
| "middle": [], |
| "last": "Nair", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In ICML.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Supervised noun phrase coreference research: The first fifteen years", |
| "authors": [ |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th annual meeting of the association for computational linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1396--1411", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proceedings of the 48th annual meeting of the association for computational linguistics, pages 1396-1411. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Automatic content extraction (ACE)", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Nist", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "NIST. 2004. Automatic content extraction (ace).", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "A literature survey of active machine learning in the context of natural language processing", |
| "authors": [ |
| { |
| "first": "Fredrik", |
| "middle": [], |
| "last": "Olsson", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fredrik Olsson. 2009. A literature survey of active machine learning in the context of natural language processing.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "GloVe: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes", |
| "authors": [ |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Moschitti", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Uryupina", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuchen", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Joint Conference on EMNLP and CoNLL-Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "1--40", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Joint Conference on EMNLP and CoNLL-Shared Task, pages 1-40. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "An active learning approach to coreference resolution", |
| "authors": [ |
| { |
| "first": "Mrinmaya", |
| "middle": [], |
| "last": "Sachan", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [ |
| "P" |
| ], |
| "last": "Xing", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Twenty-Fourth International Joint Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mrinmaya Sachan, Eduard Hovy, and Eric P Xing. 2015. An active learning approach to coreference resolution. In Twenty-Fourth International Joint Conference on Artificial Intelligence.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Active learning for logistic regression: an evaluation", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [ |
| "I" |
| ], |
| "last": "Schein", |
| "suffix": "" |
| }, |
| { |
| "first": "Lyle", |
| "middle": [ |
| "H" |
| ], |
| "last": "Ungar", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Machine Learning", |
| "volume": "68", |
| "issue": "", |
| "pages": "235--265", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew I Schein and Lyle H Ungar. 2007. Active learning for logistic regression: an evaluation. Machine Learning, 68(3):235-265.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Active learning literature survey", |
| "authors": [ |
| { |
| "first": "Burr", |
| "middle": [], |
| "last": "Settles", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Burr Settles. 2009. Active learning literature survey. Technical report, University of Wisconsin-Madison Depart- ment of Computer Sciences.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Deep active learning for named entity recognition", |
| "authors": [ |
| { |
| "first": "Yanyao", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Hyokun", |
| "middle": [], |
| "last": "Yun", |
| "suffix": "" |
| }, |
| { |
| "first": "Zachary", |
| "middle": [ |
| "C" |
| ], |
| "last": "Lipton", |
| "suffix": "" |
| }, |
| { |
| "first": "Yakov", |
| "middle": [], |
| "last": "Kronrod", |
| "suffix": "" |
| }, |
| { |
| "first": "Animashree", |
| "middle": [], |
| "last": "Anandkumar", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1707.05928" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yanyao Shen, Hyokun Yun, Zachary C Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep active learning for named entity recognition. arXiv preprint arXiv:1707.05928.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "A model-theoretic coreference scoring scheme", |
| "authors": [ |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Vilain", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Burger", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Aberdeen", |
| "suffix": "" |
| }, |
| { |
| "first": "Dennis", |
| "middle": [], |
| "last": "Connolly", |
| "suffix": "" |
| }, |
| { |
| "first": "Lynette", |
| "middle": [], |
| "last": "Hirschman", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the 6th conference on Message understanding", |
| "volume": "", |
| "issue": "", |
| "pages": "45--52", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the 6th conference on Message understanding, pages 45-52. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Mind the gap: A balanced corpus of gendered ambiguous pronouns", |
| "authors": [ |
| { |
| "first": "Kellie", |
| "middle": [], |
| "last": "Webster", |
| "suffix": "" |
| }, |
| { |
| "first": "Marta", |
| "middle": [], |
| "last": "Recasens", |
| "suffix": "" |
| }, |
| { |
| "first": "Vera", |
| "middle": [], |
| "last": "Axelrod", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Baldridge", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "6", |
| "issue": "", |
| "pages": "605--617", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kellie Webster, Marta Recasens, Vera Axelrod, and Jason Baldridge. 2018. Mind the gap: A balanced corpus of gendered ambiguous pronouns. Transactions of the Association for Computational Linguistics, 6:605-617.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Learning anaphoricity and antecedent ranking features for coreference resolution", |
| "authors": [ |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Wiseman", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "Matthew" |
| ], |
| "last": "Rush", |
| "suffix": "" |
| }, |
| { |
| "first": "Stuart", |
| "middle": [ |
| "Merrill" |
| ], |
| "last": "Shieber", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sam Wiseman, Alexander Matthew Rush, Stuart Merrill Shieber, and Jason Weston. 2015. Learning anaphoricity and antecedent ranking features for coreference resolution. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Coreference resolution using competition learning approach", |
| "authors": [ |
| { |
| "first": "Xiaofeng", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Guodong", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "Chew Lim", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "176--183", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiaofeng Yang, Guodong Zhou, Jian Su, and Chew Lim Tan. 2003. Coreference resolution using competition learning approach. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics- Volume 1, pages 176-183. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "A twin-candidate model for learning-based anaphora resolution", |
| "authors": [ |
| { |
| "first": "Xiaofeng", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "Chew Lim", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computational Linguistics", |
| "volume": "34", |
| "issue": "3", |
| "pages": "327--356", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiaofeng Yang, Jian Su, and Chew Lim Tan. 2008. A twin-candidate model for learning-based anaphora resolu- tion. Computational Linguistics, 34(3):327-356.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Domain adaptation with active learning for coreference resolution", |
| "authors": [ |
| { |
| "first": "Shanheng", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Hwee Tou", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 5th International Workshop on Health Text Mining and Information Analysis (Louhi)", |
| "volume": "", |
| "issue": "", |
| "pages": "21--29", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shanheng Zhao and Hwee Tou Ng. 2014. Domain adaptation with active learning for coreference resolution. In Proceedings of the 5th International Workshop on Health Text Mining and Information Analysis (Louhi), pages 21-29, Gothenburg, Sweden, April. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "text": "The number of successful queries for each AL session. The sessions have been normalised for document length, as some sessions involve significantly longer documents. Experiment 3 is not included, as it overlapped with the baseline system. The approaches in Experiments 2 and 4 are more effective at providing successful label pairs than the other experiments, particularly for longer documents.", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| } |
| } |
| } |
| } |