{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:21:28.652210Z"
},
"title": "Partially-supervised Mention Detection",
"authors": [
{
"first": "Lesly",
"middle": [],
"last": "Miculicich",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Idiap Research Institute",
"location": {
"country": "Switzerland, Switzerland"
}
},
"email": "lmiculicich@idiap.ch"
},
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Idiap Research Institute",
"location": {
"country": "Switzerland, Switzerland"
}
},
"email": "jhenderson@idiap.ch"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Learning to detect entity mentions without using syntactic information can be useful for integration and joint optimization with other tasks. However, it is common to have partially annotated data for this problem. Here, we investigate two approaches to deal with partial annotation of mentions: weighted loss and soft-target classification. We also propose two neural mention detection approaches: a sequence tagging, and an exhaustive search. We evaluate our methods with coreference resolution as a downstream task, using multitask learning. The results show that the recall and F1 score improve for all methods.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Learning to detect entity mentions without using syntactic information can be useful for integration and joint optimization with other tasks. However, it is common to have partially annotated data for this problem. Here, we investigate two approaches to deal with partial annotation of mentions: weighted loss and soft-target classification. We also propose two neural mention detection approaches: a sequence tagging, and an exhaustive search. We evaluate our methods with coreference resolution as a downstream task, using multitask learning. The results show that the recall and F1 score improve for all methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Mention detection is the task of identifying text spans referring to an entity: named, nominal or pronominal (Florian et al., 2004) . It is a fundamental component for several downstream tasks, such as coreference resolution (Soon et al., 2001) , and relation extraction (Mintz et al., 2009) ; and it can help to maintain coherence in large text generation (Clark et al., 2018) , and contextualized machine translation (Miculicich et al., 2018) . Previous studies tackled mention detection jointly with named entity recognition (Xu et al., 2017; Katiyar and Cardie, 2018; Ju et al., 2018; Wang et al., 2018) . There, only certain types of entities are considered (e.g., person, location), and the goal is to recognize mention spans and their types. In this study, we are interested in discovering entity mentions, which can potentially be referred to in the text, without the use of syntactic parsing information. Our long term objective is to have a model that keeps track of entities in a document for word disambiguating language modeling and machine translation.",
"cite_spans": [
{
"start": 109,
"end": 131,
"text": "(Florian et al., 2004)",
"ref_id": "BIBREF3"
},
{
"start": 225,
"end": 244,
"text": "(Soon et al., 2001)",
"ref_id": null
},
{
"start": 271,
"end": 291,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 357,
"end": 377,
"text": "(Clark et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 419,
"end": 444,
"text": "(Miculicich et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 528,
"end": 545,
"text": "(Xu et al., 2017;",
"ref_id": "BIBREF21"
},
{
"start": 546,
"end": 571,
"text": "Katiyar and Cardie, 2018;",
"ref_id": "BIBREF9"
},
{
"start": 572,
"end": 588,
"text": "Ju et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 589,
"end": 607,
"text": "Wang et al., 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Data from coreference resolution is suitable for our task, but the annotation is partial in that it contains only mentions that belong to a coreference chain, not singletons. Nevertheless, the missing mentions have approximately the same distribution as the annotated ones, so we can still learn this distribution from the data. Figure 1 shows an example from Ontonotes V.5 dataset (Pradhan et al., 2012) where \"the taxi driver\" is annotated in sample 1 but not in 2. Thus, we approach mention detection as a partially supervised problem and investigate two simple techniques to compensate for the fact that some negative examples are true mentions: weighted loss functions and soft-target classification. By doing this, the model is encouraged to predict more false-positive samples, so it can detect potential mentions which were not annotated. We implement two neural mention detection methods: a sequence tagging approach, and an exhaustive search approach. The first method is novel, whereas the other is similar to previous work (Lee et al., 2017) . We evaluate both techniques for coreference resolution by implementing a multitask learning system. We show that the proposed techniques help the model increase recall significantly with a minimal decrease in precision. In consequence, the F1 score of the mention detection and coreference resolution improves for both methods, and the exhaustive search approach yields a significant improvement over the baseline coreference resolver.",
"cite_spans": [
{
"start": 382,
"end": 404,
"text": "(Pradhan et al., 2012)",
"ref_id": "BIBREF17"
},
{
"start": 1035,
"end": 1053,
"text": "(Lee et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 329,
"end": 337,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "i We investigate two techniques to deal with partially annotated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http: //creativecommons.org/licenses/by/4.0/. ii We propose a sequence tagging method for mention detection that can model nested mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "iii We improve an exhaustive search method for mention detection. iv We approach mention detection and coreference resolution as multitask learning and improve both tasks' recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. Sections 2 and 3 describe the two mention detection approaches we use in our experiments. Section 4 presents the proposed methods to deal with partially annotated mentions. We use coreference resolution as a proxy task for testing our methods which is described in Section 5. Section 6 contains the experimental setting and the analysis of results. Section 7 contains related work to this study. Finally, the final conclusion is drawn Section 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several studies have tackled mention detection and named entity recognition as a tagging problem. Some of them use one-to-one sequence tagging techniques (Lample et al., 2016; Xu et al., 2017) , while others use more elaborate techniques to include nested mentions (Katiyar and Cardie, 2018; Wang et al., 2018) . Here, we propose a simpler yet effective tagging approach that can manage nested mentions.",
"cite_spans": [
{
"start": 154,
"end": 175,
"text": "(Lample et al., 2016;",
"ref_id": "BIBREF10"
},
{
"start": 176,
"end": 192,
"text": "Xu et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 265,
"end": 291,
"text": "(Katiyar and Cardie, 2018;",
"ref_id": "BIBREF9"
},
{
"start": 292,
"end": 310,
"text": "Wang et al., 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence tagging model",
"sec_num": "2"
},
{
"text": "We use a sequence-to-sequence model, which allows us to tag each word with multiple labels. The words are first encoded and contextualized using a recurrent neural network, and then a sequential decoder predicts the output tag sequence. During decoding, the model keeps a pointer into the encoder, indicating the word's position, which is being tagged at each time step. The tagging is done using the following set of symbols: {[, ], +, -} . The brackets \"[\" and \"]\" indicate that the tagged word is the starting or ending of a mention respectively, the symbol \"+\" indicates that one or more mention brackets are open, and \"-\" indicates that none mention bracket is open. The pointer into the encoder moves to the next word only after predicting \"+\" or \"-\"; otherwise, it remains in the same position. Figure 2 shows a tagging example indicating the alignments of words with tags.",
"cite_spans": [],
"ref_spans": [
{
"start": 802,
"end": 810,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Sequence tagging model",
"sec_num": "2"
},
{
"text": "Given a corpus of sentences X = (x 1 , ..., x M ), the goal is to find the parameters \u0398 which maximize the log likelihood of the corresponding tag sequences Y = (y 1 , ..., y T ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence tagging model",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P \u0398 (Y |X) = T t=1 P \u0398 (y t |X, y 1 , ..., y t\u22121 )",
"eq_num": "(1)"
}
],
"section": "Sequence tagging model",
"sec_num": "2"
},
{
"text": "The next tag probability is estimated with a softmax over the output vector of a neural network:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence tagging model",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P \u0398 (y t |X, y 1 , ..., y t\u22121 ) = sof tmax(o t ) (2) o t = relu(W o \u2022 [d t , h i ] + b o )",
"eq_num": "(3)"
}
],
"section": "Sequence tagging model",
"sec_num": "2"
},
{
"text": "where W o , b o are parameters of the network, d t is the vector representation of the tagged sequence at time-step t, modeled with a long-short term memory (LSTM) (Hochreiter and Schmidhuber, 1997) , and h i is the vector representation of the pointer's word at time t contextualized with a bidirectional LSTM (Graves and Schmidhuber, 2005) . where the decoder is initialized with the last states of the bidirectional encoder,",
"cite_spans": [
{
"start": 164,
"end": 198,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 311,
"end": 341,
"text": "(Graves and Schmidhuber, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence tagging model",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(h 1 , ..., h M ) = BiLST M (X) (4) d t = LST M (y 1 , ..., y t\u22121 )",
"eq_num": "(5)"
}
],
"section": "Sequence tagging model",
"sec_num": "2"
},
{
"text": "d 0 = h M .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence tagging model",
"sec_num": "2"
},
{
"text": "The i-th word pointed to at time t is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence tagging model",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i \u2190 \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 0, if t = 0 i + 1, if t > 0 and y t\u22121 \u2208 {+, -} i, otherwise",
"eq_num": "(6)"
}
],
"section": "Sequence tagging model",
"sec_num": "2"
},
{
"text": "At decoding time, we use a beam search approach to obtain the sequence. The complexity of the model is linear with respect to the number of words. It can be parallelized at training time, given that it uses ground-truth data for the conditioned variables. However, it cannot be parallelized during decoding because of its autoregressive nature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence tagging model",
"sec_num": "2"
},
{
"text": "Our span scoring model of mention detection is similar to the work of Lee et al. (2017) for solving coreference resolution, and to Ju et al. (2018) for nested named mention detection, as both are exhaustive search methods. The objective is to score all possible spans m ij in a document, where i and j are the starting and ending word positions of the span in the document. For this purpose, we minimize the binary cross-entropy with the labels y:",
"cite_spans": [
{
"start": 70,
"end": 87,
"text": "Lee et al. (2017)",
"ref_id": "BIBREF11"
},
{
"start": 131,
"end": 147,
"text": "Ju et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Span scoring model",
"sec_num": "3"
},
{
"text": "H(y, P \u0398 (m)) = \u2212 1 M 2 M i=1 M j=1 (y m ij * log(P \u0398 (m ij )) + (1\u2212y m ij ) * log(1\u2212P \u0398 (m ij )) ) (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span scoring model",
"sec_num": "3"
},
{
"text": "where \u0398 are the parameters of the model, y m ij \u2208 [0, 1] is one when there is a mention from position i to j. If y m ij is zero when there is no mention annotated, this is the same as maximizing the log-likelihood. Nevertheless, we will consider models where this is not the case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span scoring model",
"sec_num": "3"
},
{
"text": "The probability of detection is estimated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span scoring model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P \u0398 (m ij ) = \u03c3(V \u2022 relu(W m \u2022 m ij + b m )) (8) m ij =relu(W h \u2022 [h i , h j ,x ij ] + b h )",
"eq_num": "(9)"
}
],
"section": "Span scoring model",
"sec_num": "3"
},
{
"text": "where V, W m , W h are weight parameters of the model, b m , b h are biases, and m ij is a representation of the span from position i to j. It is calculated with the contextualized representations of the starting and ending words h i , h j , and the average of the word embeddingsx ij :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span scoring model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(h 1 , ..., h M ) = BiLST M (X)",
"eq_num": "(10)"
}
],
"section": "Span scoring model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ij = 1 j \u2212 i j k=i x k",
"eq_num": "(11)"
}
],
"section": "Span scoring model",
"sec_num": "3"
},
{
"text": "The complexity of this model is quadratic with respect to the number of words. However, it can be parallelized at training and decoding time. Lee et al. (2017) uses an attention function over the embeddings instead of an average. That approach is less memory efficient and requires the maximum length of spans as a hyperparameter. Also, they include embeddings of the span lengths which are learned during training. As shown in the experimental part, these components do not improve the performance of our model.",
"cite_spans": [
{
"start": 142,
"end": 159,
"text": "Lee et al. (2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Span scoring model",
"sec_num": "3"
},
{
"text": "The partial annotation of coreference data for mention detection means that not labeled spans may be true mentions of entities. Thus, the approach of treating spans without mention annotations as true negative examples would be incorrect. On the other hand, the ideal solution of sampling all possible mention annotations, which are consistent with the given partial annotation, would be intractable. We want to modify the model's loss function in such a way that, if the system predicts a false-positive, the loss is reduced. This encourages the model to favor recall over precision by predicting more mention-like spans, even when they are not labeled. We assume that it is possible to learn the true mention distribution using the annotated mention samples by extrapolating the non-annotated mentions, and we propose two ways to encourage the model to do so.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partially annotated data",
"sec_num": "4"
},
{
"text": "We use a weighted loss function with weight w\u2208{0, 1} for negative examples only. The sequence tagging model makes word-wise decisions; thus, we consider words tagged as \"out of mention\", y t =\"-\", as negative examples, while the rest are positives. Although this simplification has the potential to increase inconsistencies, e.g., having non-ending or overlapping mentions, we observe that the LSMT-based model can capture the simple grammar of the tag labels with very few mistakes. For span scoring, the distinction between negative and positive examples is clear, given that the decisions are made for each span.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted loss function:",
"sec_num": null
},
{
"text": "Soft-target classification: Soft-targets allow us to have a distribution over all classes instead of having a single class annotation. Thus, we applied soft-targets to negative examples to reflect the probability that they could actually be positive ones. For sequence tagging, we set the target of negative examples, y t =\"-\", to (\u03c1, \u03c1, \u03c1, 1 \u2212 3\u03c1) corresponding to the classes ([, ], +, -). For span scoring, we change the target of negative examples to y neg =\u03c1. In both cases, \u03c1 is the probability of the example being positive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted loss function:",
"sec_num": null
},
{
"text": "We use multitask learning to train the mention detection together with coreference resolution. The weights to sum the loss functions of each task are estimated during training, as in Cipolla et al. (2018) . The sentence encoder is shared, and the output of mention detection serves as input to coreference resolution. We use the coreference resolver proposed by Lee et al. (2017) . It uses a pair-wise scoring function s between a mention m k and each of its candidate antecedents m a , defined as:",
"cite_spans": [
{
"start": 183,
"end": 204,
"text": "Cipolla et al. (2018)",
"ref_id": "BIBREF0"
},
{
"start": 362,
"end": 379,
"text": "Lee et al. (2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s(m k , m a ) = s c (m k , m a ) + s m (m k ) + s m (m a )",
"eq_num": "(12)"
}
],
"section": "Coreference Resolution",
"sec_num": "5"
},
{
"text": "where s c is a function that assesses whether two mentions refer to the same entity. We modified the mention detection score s m .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "5"
},
{
"text": "For the sequence tagging approach, the function s m serves as a bias value and it is calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s m = v.P (y t i = \"[\").P (y t j = \"]\")",
"eq_num": "(13)"
}
],
"section": "Coreference Resolution",
"sec_num": "5"
},
{
"text": "where y t i and y t j are the labels of the first and last words of the span, and v is a scalar parameter learned during training. At test time, only mentions in the one-best output of the mention detection model are candidate mentions for the coreference resolver. During training, the set of candidate mentions includes both the spans detected by the mention detection model and the ground truth mentions. The mention decoder is run for one pass with ground-truth labels in the conditional part of the probability function (Eq. 2), to get the mention detection loss, and run for a second pass with predicted labels to provide input for the coreferece task and compute the coreference loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "5"
},
{
"text": "For the span scoring approach, s m is a function of the probability defined in Eq. 8, scaled by a parameter v learned during training. 2017, we use a multitask objective, which adds the loss function of mention detection. We do not prune mentions with a maximum length, nor impose any maximum number of mentions per document. We use the probability of the mention detector with a threshold of \u03c4 for pruning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s m = v.P (m i,j )",
"eq_num": "(14)"
}
],
"section": "Coreference Resolution",
"sec_num": "5"
},
{
"text": "We evaluate our model on the English OntoNotes set from the CoNLL 2012 shared-task (Pradhan et al., 2012) , which has 2802 documents for training, 343 for development, and 348 for testing. The setup is the same as Lee et al. (2017) for comparison purposes, with the hyper-parameters \u03c1, w, \u03c4 optimized on the development set. We use the average F1 score as defined in the shared-task (Pradhan et al., 2012) for evaluation of mention detection and coreference resolution.",
"cite_spans": [
{
"start": 83,
"end": 105,
"text": "(Pradhan et al., 2012)",
"ref_id": "BIBREF17"
},
{
"start": 214,
"end": 231,
"text": "Lee et al. (2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "6"
},
{
"text": "First, we evaluate our stand-alone mention detectors. For this evaluation, all unannotated mentions are treated as negative examples. Table 1 show the results on the test set with models selected using the best F1 score with \u03c4 =0.5, on the development set. We can see that sequence tagging performs almost as well as span scoring in F1 score, even though the latter is an exhaustive search method. We also evaluate the span scoring model with different components from Lee et al. (2017) . By adding the span size vector, the precision increases but the recall decreases. Replacing the average embeddingx with attention over the embeddings requires a limited span size for memory efficiency, resulting in decreased performance. Table 2 shows the results obtained for our multitask systems for coreference resolution and mention detection with and without the loss modification. The sequence tagging method obtains lower performance compared to span scoring. This result can be attributed to its one-best method to select mentions, in contrast to span scoring, where uncertainty is fully integrated with the coreference system. The span scoring method performs similarly to the coreference resolution baseline, showing that the naive introduction of a loss for mention detection does not improve performance (although we find it does decrease convergence time). However, adding the modified mention loss does improve coreference performance. For sequence tagging, the weighted loss results in higher performance, while for the span scoring, softtargets work best. In both cases, the recall increases with a small decrease in precision, which improves the F1 score of mention detection and improves coreference resolution. Figure 3 shows a comparison of the mention detection methods in terms of recall. The unmodified sequence tagging model achieves 73.7% recall, and by introducing a weighted loss at w=0.01, it reaches 90.5%. 
The lines show the variation of recall for the span scoring method with respect to the detection threshold of \u03c4 . The dotted line represents the unmodified model, while the continuous line represents the model with soft-targets at \u03c1=0.1, which shows higher recall for every \u03c4 . Lee et al. (2017) proposed the first end-to-end coreference resolution that does not require heavy feature engineering for word representations. Their mention detection is done by considering all spans in a Figure 3 : Recall of the mention scoring function with respect to the detection threshold \u03c4 . Values for the sequence tagging are referential document as the candidate mentions, and the learning signal is coming indirectly from the coreference annotation. Zhang et al. (2018) used a similar approach but introducing a direct learning signal for the mention detection, which is done by adding a loss for mention detection with a scaling factor as hyperparameter. This allows a faster convergence at training time. Lee et al. (2018) proposed a high-order coreference resolution where the mention representation are inferred over several iterations of the model. However, the mention detection part is same as in (Lee et al., 2017) . The following studies proposed improvements over this work (Fei et al., 2019; Joshi et al., 2019; Joshi et al., 2020) but maintaining the same method for mention detection. Name entity recognition has been largely studied in the community. However, many of these models ignored the nested entity names. Katiyar and Cardie (2018) presents a nested named entity recognition model using a recurrent neural network that includes extra connections to handle nested mention detection. Ju et al. (2018) uses stack layers to model the nested mentions, and (Wang et al., 2018) use an stack recurrent network. Lin et al. (2019) proposed a sequence-to-nuggets architecture for nested mention detection. uses pointer networks and adversarial learning. 
Shibuya and Hovy (2020) uses CRF with a iterative decoder that detect nested mentions from the outer to the inner tags. Yu et al. (2020) use a bi-affine model with a similar method as in (Lee et al., 2017) .",
"cite_spans": [
{
"start": 469,
"end": 486,
"text": "Lee et al. (2017)",
"ref_id": "BIBREF11"
},
{
"start": 2204,
"end": 2221,
"text": "Lee et al. (2017)",
"ref_id": "BIBREF11"
},
{
"start": 2667,
"end": 2686,
"text": "Zhang et al. (2018)",
"ref_id": "BIBREF23"
},
{
"start": 2924,
"end": 2941,
"text": "Lee et al. (2018)",
"ref_id": "BIBREF12"
},
{
"start": 3121,
"end": 3139,
"text": "(Lee et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 3201,
"end": 3219,
"text": "(Fei et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 3220,
"end": 3239,
"text": "Joshi et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 3240,
"end": 3259,
"text": "Joshi et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 3445,
"end": 3470,
"text": "Katiyar and Cardie (2018)",
"ref_id": "BIBREF9"
},
{
"start": 3621,
"end": 3637,
"text": "Ju et al. (2018)",
"ref_id": "BIBREF8"
},
{
"start": 3690,
"end": 3709,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 3742,
"end": 3759,
"text": "Lin et al. (2019)",
"ref_id": "BIBREF14"
},
{
"start": 4002,
"end": 4018,
"text": "Yu et al. (2020)",
"ref_id": "BIBREF22"
},
{
"start": 4069,
"end": 4087,
"text": "(Lee et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 134,
"end": 141,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 727,
"end": 734,
"text": "Table 2",
"ref_id": null
},
{
"start": 1720,
"end": 1728,
"text": "Figure 3",
"ref_id": null
},
{
"start": 2411,
"end": 2419,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mention detection",
"sec_num": "6.1"
},
{
"text": "We investigate two simple techniques to deal with partially annotated data for mention detection and propose two methods to approach it: a Weighted loss function and a soft-target classification. We evaluate them on coreference resolution and mention detection with a multitask learning approach. We show that the techniques effectively increase the recall of mentions and coreference links with a small decrease in precision, thus, improving the F1 score. In the future, we plan to use these methods to maintain coherence over long distances when reading, translating, and generating large text, by keeping track of abstract representations of entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
}
],
"back_matter": [
{
"text": "We are grateful for the support of the Swiss National Science Foundation under the project LAOS, grant number \"FNS-30216\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multi-task learning using uncertainty to weigh losses for scene geometry and semantics",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Cipolla",
"suffix": ""
},
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kendall",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "7482--7491",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Cipolla, Yarin Gal, and Alex Kendall. 2018. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7482-7491. IEEE.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural text generation in stories using entity representations as context",
"authors": [
{
"first": "Elizabeth",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2250--2260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizabeth Clark, Yangfeng Ji, and Noah A. Smith. 2018. Neural text generation in stories using entity representa- tions as context. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2250-2260, New Orleans, Louisiana, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "End-to-end deep reinforcement learning based coreference resolution",
"authors": [
{
"first": "Hongliang",
"middle": [],
"last": "Fei",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dingcheng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ping",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongliang Fei, Xu Li, Dingcheng Li, and Ping Li. 2019. End-to-end deep reinforcement learning based corefer- ence resolution. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A statistical model for multilingual entity detection and tracking",
"authors": [
{
"first": "R",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Kambhatla",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Nicolov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2004,
"venue": "HLT-NAACL 2004: Main Proceedings",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Florian, H Hassan, A Ittycheriah, H Jing, N Kambhatla, X Luo, N Nicolov, and S Roukos. 2004. A statistical model for multilingual entity detection and tracking. In HLT-NAACL 2004: Main Proceedings, pages 1-8, Boston, Massachusetts, USA, May 2 -May 7. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Framewise phoneme classification with bidirectional lstm and other neural network architectures",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2005,
"venue": "Neural Networks",
"volume": "18",
"issue": "5-6",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks, 18(5-6):602-610.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Long Short-Term Memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT for coreference resolution: Baselines and analysis",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Weld",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5803--5808",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel Weld. 2019. BERT for coreference resolution: Baselines and analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5803- 5808, Hong Kong, China, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "SpanBERT: Improving pre-training by representing and predicting spans",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "64--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A neural layered model for nested named entity recognition",
"authors": [
{
"first": "Meizhi",
"middle": [],
"last": "Ju",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1446--1459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recog- nition. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446-1459, New Orleans, Louisiana, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Nested named entity recognition revisited",
"authors": [
{
"first": "Arzoo",
"middle": [],
"last": "Katiyar",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "861--871",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 861-871, New Orleans, Louisiana, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "260--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "End-to-end neural coreference resolution",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "188--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197, Copenhagen, Denmark, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Higher-order coreference resolution with coarse-to-fine inference",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "687--692",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 687-692, New Orleans, Louisiana, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adversarial transfer for named entity boundary detection with pointer networks",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Deheng",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Shang",
"suffix": ""
}
],
"year": 2019,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "5053--5059",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Li, Deheng Ye, and Shuo Shang. 2019. Adversarial transfer for named entity boundary detection with pointer networks. In IJCAI, pages 5053-5059.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Sequence-to-nuggets: Nested entity mention detection via anchor-region networks",
"authors": [
{
"first": "Hongyu",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yaojie",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Xianpei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Le",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5182--5192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2019. Sequence-to-nuggets: Nested entity mention detection via anchor-region networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5182-5192.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Document-level neural machine translation with hierarchical attention networks",
"authors": [
{
"first": "Lesly",
"middle": [],
"last": "Miculicich",
"suffix": ""
},
{
"first": "Dhananjay",
"middle": [],
"last": "Ram",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Pappas",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2947--2954",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural ma- chine translation with hierarchical attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2947-2954, Brussels, Belgium, October-November. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003-1011, Suntec, Singapore, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "Joint Conference on EMNLP and CoNLL -Shared Task",
"volume": "",
"issue": "",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 1-40, Jeju Island, Korea, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Nested named entity recognition via second-best sequence learning and decoding",
"authors": [
{
"first": "Takashi",
"middle": [],
"last": "Shibuya",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "605--620",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takashi Shibuya and Eduard Hovy. 2020. Nested named entity recognition via second-best sequence learning and decoding. Transactions of the Association for Computational Linguistics, 8:605-620.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A machine learning approach to coreference resolution of noun phrases",
"authors": [
{
"first": "Wee",
"middle": [
"Meng"
],
"last": "Soon",
"suffix": ""
},
{
"first": "Hwee",
"middle": [
"Tou"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"Chung",
"Yong"
],
"last": "Lim",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "4",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521-544.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A neural transition-based model for nested mention recognition",
"authors": [
{
"first": "Bailin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hongxia",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1011--1017",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bailin Wang, Wei Lu, Yu Wang, and Hongxia Jin. 2018. A neural transition-based model for nested mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1011-1017, Brussels, Belgium, October-November. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A local detection approach for named entity recognition and mention detection",
"authors": [
{
"first": "Mingbin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Sedtawut",
"middle": [],
"last": "Watcharawittayakul",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1237--1247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingbin Xu, Hui Jiang, and Sedtawut Watcharawittayakul. 2017. A local detection approach for named entity recognition and mention detection. In Proceedings of the 55th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1237-1247, Vancouver, Canada, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Named entity recognition as dependency parsing",
"authors": [
{
"first": "Juntao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.07150"
]
},
"num": null,
"urls": [],
"raw_text": "Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020. Named entity recognition as dependency parsing. arXiv preprint arXiv:2005.07150.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Neural coreference resolution with deep biaffine attention by joint mention detection and mention clustering",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Cicero",
"middle": [
"Nogueira"
],
"last": "dos Santos",
"suffix": ""
},
{
"first": "Michihiro",
"middle": [],
"last": "Yasunaga",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "102--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Zhang, Cicero Nogueira dos Santos, Michihiro Yasunaga, Bing Xiang, and Dragomir Radev. 2018. Neural coreference resolution with deep biaffine attention by joint mention detection and mention clustering. In Pro- ceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 102-107, Melbourne, Australia, July. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Samples from CoNLL 2012. Annotated mentions are within brackets, non-annotated ones are underlined.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Tagged sentence example",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"content": "<table><tr><td>Model</td><td>Rec. Prec. F1</td></tr><tr><td>Sequence tagging</td><td>73.7 77.5 75.6</td></tr><tr><td>Span scoring</td><td>72.7 79.2 75.8</td></tr><tr><td>+ span size emb.</td><td>71.6 80.1 75.6</td></tr><tr><td>- avg. emb. + att. emb.</td><td>72.1 78.9 75.4</td></tr></table>",
"html": null,
"text": "Mention detection evaluation",
"type_str": "table"
}
}
}
}