{
"paper_id": "K17-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:08:11.692520Z"
},
"title": "Robust Coreference Resolution and Entity Linking on Dialogues: Character Identification on TV Show Transcripts",
"authors": [
{
"first": "Henry",
"middle": [
"Y"
],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Emory University Atlanta",
"location": {
"postCode": "30322",
"region": "GA",
"country": "USA"
}
},
"email": "henry.chen@emory.edu"
},
{
"first": "Ethan",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Emory University Atlanta",
"location": {
"postCode": "30322",
"region": "GA",
"country": "USA"
}
},
"email": "ethan.zhou@emory.edu"
},
{
"first": "Jinho",
"middle": [
"D"
],
"last": "Choi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Emory University Atlanta",
"location": {
"postCode": "30322",
"region": "GA",
"country": "USA"
}
},
"email": "jinho.choi@emory.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a novel approach to character identification, an entity linking task that maps mentions to characters in dialogues from TV show transcripts. We first augment and correct several cases of annotation errors in an existing corpus so that the corpus is clearer and cleaner for statistical learning. We also introduce the agglomerative convolutional neural network, which takes groups of features and learns mention and mention-pair embeddings for coreference resolution. We then propose another neural model that employs the learned embeddings and creates cluster embeddings for entity linking. Our coreference resolution model shows results comparable to other state-of-the-art systems. Our entity linking model significantly outperforms the previous work, achieving an F1 score of 86.76% and an accuracy of 95.30% for character identification.",
"pdf_parse": {
"paper_id": "K17-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a novel approach to character identification, an entity linking task that maps mentions to characters in dialogues from TV show transcripts. We first augment and correct several cases of annotation errors in an existing corpus so that the corpus is clearer and cleaner for statistical learning. We also introduce the agglomerative convolutional neural network, which takes groups of features and learns mention and mention-pair embeddings for coreference resolution. We then propose another neural model that employs the learned embeddings and creates cluster embeddings for entity linking. Our coreference resolution model shows results comparable to other state-of-the-art systems. Our entity linking model significantly outperforms the previous work, achieving an F1 score of 86.76% and an accuracy of 95.30% for character identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Character identification (Chen and Choi, 2016 ) is a task that identifies each mention as a character in a multiparty dialogue. 1 Let a mention be a nominal referring to a human (e.g., she, mom, Judy), and an entity be a character in the dialogue. The objective is to assign each mention to an entity, who may or may not appear as a speaker in the dialogue. For the example in Table 1 , the mention comedian is not one of the speakers in the dialogue; nonetheless, it clearly refers to a real person that may appear in some other dialogues. Identifying such mentions as actual characters requires cross-document entity resolution, which makes this task challenging.",
"cite_spans": [
{
"start": 25,
"end": 45,
"text": "(Chen and Choi, 2016",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 377,
"end": 384,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Character identification can be viewed as a task of entity linking. Most of the previous work on entity linking focuses on Wikification (Mihalcea and Csomai, 2007a; Ratinov et al., 2011a; Guo et al., 2013) . Unlike Wikification, entities in this task have no precompiled information from a knowledge base, which is another challenging aspect. This task is similar to coreference resolution in the sense that it groups mentions into entities, but distinct because it requires the identification of mention groups as real entities. Furthermore, even if it can be tackled as a coreference resolution task, only a few coreference resolution systems are designed to handle dialogues well (Rocha, 1999; Niraula et al., 2014) although several state-of-the-art systems have been proposed for the general domain (Peng et al., 2015; Clark and Manning, 2016; Wiseman et al., 2016) .",
"cite_spans": [
{
"start": 136,
"end": 164,
"text": "(Mihalcea and Csomai, 2007a;",
"ref_id": "BIBREF12"
},
{
"start": 165,
"end": 187,
"text": "Ratinov et al., 2011a;",
"ref_id": "BIBREF17"
},
{
"start": 188,
"end": 205,
"text": "Guo et al., 2013)",
"ref_id": "BIBREF8"
},
{
"start": 683,
"end": 696,
"text": "(Rocha, 1999;",
"ref_id": "BIBREF19"
},
{
"start": 697,
"end": 718,
"text": "Niraula et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 803,
"end": 822,
"text": "(Peng et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 823,
"end": 847,
"text": "Clark and Manning, 2016;",
"ref_id": "BIBREF4"
},
{
"start": 848,
"end": 869,
"text": "Wiseman et al., 2016)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Due to the nature of multiparty dialogues where speakers take turns to complete a context, character identification becomes a critical step to adapt higher-level NLP tasks (e.g., question answering, summarization) to this domain. This task can also bring another level of sophistication to intelligent personal assistants and intelligent tutoring systems. Perhaps the most challenging aspect comes from colloquial writing that consists of ironies, metaphors, or rhetorical questions. Despite all the challenges, we believe that the output of this task will enhance inference on dialogue contexts by providing finer-grained information about individuals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we augment and correct the existing corpus for character identification, and propose an end-to-end deep-learning system that combines neural models for coreference resolution and entity linking to tackle the task of character identification. The updated corpus and the source code of our models are published and publicly available. 2 This combined system utilizes the strengths from both",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Yeah, right! ... You 1 serious? Rachel Everything you 2 need to know is in that first kiss. Chandler Yeah. For us 3 , it's like the stand-up comedian 4 you 5 have to sit through before the main dude 6 starts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joey",
"sec_num": null
},
{
"text": "It's not that we 7 don't like the comedian 8 , it's that ... that's not why we 9 bought the ticket.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ross",
"sec_num": null
},
{
"text": "{You 1 } \u2192 Rachel, {us 3 , we 7,9 } \u2192 Collective, {you 2,5 } \u2192 General, {comedian 4,8 } \u2192 Generic, {dude 6 } \u2192 Other Table 1 : An example of a multiparty dialogue extracted from the corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 124,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ross",
"sec_num": null
},
{
"text": "models. We introduce a novel approach, the agglomerative convolutional neural network, for coreference resolution to learn mention, mention-pair, and cluster embeddings, and the results are taken as input to our entity linking model that assigns mentions to their real entities. Entities, including main characters and recurring supporting characters, are selected from a TV show to mimic a realistic scenario. To the best of our knowledge, this is the first end-to-end model that performs character identification on multiparty dialogues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ross",
"sec_num": null
},
{
"text": "The latest coreference systems employ advanced context features in tandem with deep networks to achieve state-of-the-art performance (Clark and Manning, 2016; Wiseman et al., 2015) . Since our task is similar to coreference resolution, we take a similar approach to feature engineering by building mention and cluster embeddings with word embeddings (Clark and Manning, 2016) and include additional mention features described by Wiseman et al. (2015) . We are motivated to use convolutional networks through the work of Wu and Ma (2017), but we distinguish our approach by using deep convolution to build embeddings for character identification. Entity linking has traditionally relied heavily on knowledge databases, most notably, Wikipedia, for entities (Mihalcea and Csomai, 2007b; Ratinov et al., 2011b; Gattani et al., 2013; Francis-Landau et al., 2016) . 3 Although we do not make use of knowledge bases, our task is closely aligned to entity linking. Recent advances in entity linking are also applicable to our task since we see Francis-Landau et al. (2016) use convolutional nets to capture semantic similarity between a mention and an entity by comparing context of the mention with the description of the entity. This work validates our usage of deep learning for character identification.",
"cite_spans": [
{
"start": 133,
"end": 158,
"text": "(Clark and Manning, 2016;",
"ref_id": "BIBREF4"
},
{
"start": 159,
"end": 180,
"text": "Wiseman et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 350,
"end": 375,
"text": "(Clark and Manning, 2016)",
"ref_id": "BIBREF4"
},
{
"start": 429,
"end": 450,
"text": "Wiseman et al. (2015)",
"ref_id": "BIBREF21"
},
{
"start": 756,
"end": 784,
"text": "(Mihalcea and Csomai, 2007b;",
"ref_id": "BIBREF13"
},
{
"start": 785,
"end": 807,
"text": "Ratinov et al., 2011b;",
"ref_id": "BIBREF18"
},
{
"start": 808,
"end": 829,
"text": "Gattani et al., 2013;",
"ref_id": "BIBREF7"
},
{
"start": 830,
"end": 858,
"text": "Francis-Landau et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 861,
"end": 862,
"text": "3",
"ref_id": null
},
{
"start": 1037,
"end": 1065,
"text": "Francis-Landau et al. (2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Dialogue tracking has been an expanding task as shown by the Dialogue State Tracking Challenges hosted by Microsoft (Kim et al., 2015) . That an ongoing conversation can be dynamically tracked (Henderson et al., 2013) is exciting and applicable to our task because the state of a conversation may yield significant hints for entity linking and coreference resolution. Speaker identification, a task similar to character identification, has already shown some success with partial dialogue tracking by dynamically identifying speakers at each turn in a dialogue using conditional random field models.",
"cite_spans": [
{
"start": 116,
"end": 134,
"text": "(Kim et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 193,
"end": 217,
"text": "(Henderson et al., 2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The character identification corpus created by Chen and Choi (2016) includes entity annotation of personal mentions specific to the domain of multiparty dialogues. While the corpus covers a large number of the entities that appear in the first two seasons of the TV show, Friends, some of its annotation remains ambiguous, particularly around the label Unknown.",
"cite_spans": [
{
"start": 47,
"end": 67,
"text": "Chen and Choi (2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3"
},
{
"text": "An example of Unknown mentions in a snippet of a conversation is provided in Table 1 . Mentions comedian 4,8 and dude 6 are originally labeled Unknown, but they are two different entities whose labels should be distinguished. Even though their entities are not immediately identifiable, the Unknown label provides no clarity; thus, mentions under this label need to be subcategorized. We propose to disambiguate these Unknown mentions (Section 3.2), which comprise 10% of the annotation. Such disambiguation allows finer-grained categories of entity annotations of mentions. We believe the resultant annotations are more realistic and can be used to train more robust models for character identification.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3"
},
{
"text": "Before disambiguating the corpus, we find some recurring data malformations and errors in mention detection within the corpus. For example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Correction",
"sec_num": "3.1"
},
{
"text": "Rachel: (To guy with a phone) Hello, excuse me.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Correction",
"sec_num": "3.1"
},
{
"text": "The underlined action note is accidentally included in the utterance as a part of the dialogue due to a missing parenthesis, and the mention guy is consequently incorporated into the corpus. These malformations are fixed, and the mentions they contain are removed from the corpus manually before disambiguation. The correction is necessary since the inclusion of action notes is inconsistent throughout the corpus, and they are removed to avoid confusing our models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Correction",
"sec_num": "3.1"
},
{
"text": "Three labels are introduced to disambiguate Unknown mentions: General, Generic, and Other. Generic provides abstract groupings for unidentifiable entities, and each group is assigned a unique number for differentiation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Disambiguation",
"sec_num": "3.2"
},
{
"text": "\u2022 General: Mention used in reference to a general case (e.g., you 2,5 in Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus Disambiguation",
"sec_num": "3.2"
},
{
"text": "\u2022 Generic: Mention referring to an unidentifiable entity (e.g., comedian 4,8 in Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus Disambiguation",
"sec_num": "3.2"
},
{
"text": "\u2022 Other: Mention referring to an insignificant singleton entity (e.g., dude 6 in Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus Disambiguation",
"sec_num": "3.2"
},
{
"text": "We perform this disambiguation manually with two main guidelines: only mentions originally labeled Unknown are included, and the labels introduced above are provided to annotators in addition to the known entities. We limit the Generic mention groups to 5 per iteration of disambiguation for simplicity, and the scenes that require more than 5 groups are recursively annotated until all unknowns are disambiguated. Unlike the previous work, our annotators are familiar with the TV show, and the task takes about 3 weeks to complete.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Disambiguation",
"sec_num": "3.2"
},
{
"text": "The task of character identification needs rich features extracted from the mention clusters generated by a coreference resolution system. Thus, the end result of this task largely depends on the quality of the coreference resolution model. Several coreference resolution systems have been proposed and have shown state-of-the-art performance (Pradhan et al., 2012); however, they are not necessarily designed for the genre of multiparty dialogue, where each document comprises utterances from multiple speakers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "4"
},
{
"text": "This section describes a novel approach to coreference resolution using Convolutional Neural Networks (CNN). Our model takes groups of features incorporating several dialogue aspects, feeds them into deep convolution layers, and dynamically generates mention embeddings and mention-pair embeddings, which are used to create the cluster embeddings that significantly improve the performance of our entity linking model (Section 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "4"
},
{
"text": "Our coreference resolution model, Agglomerative Convolutional Neural Network (ACNN), takes advantage of deep layers in CNN. The model is called agglomerative since it aggregates multiple feature groups into several convolution layers for the generation of mention and mention-pair embeddings. Each layer aims to consolidate and learn different combinations of the input features, and additional features are included at each layer. The unique nature of our model allows incremental feature aggregations to create more robust embeddings. Figure 1 illustrates the complete architecture of ACNN.",
"cite_spans": [],
"ref_spans": [
{
"start": 537,
"end": 545,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Agglomerative CNN",
"sec_num": "4.1"
},
{
"text": "The first part of the network learns the mention embedding for each of the two mentions compared for a coreferent relation. Given two feature maps \u03c6_e^k(m) and \u03c6_d(m) where m is a mention, \u03c6_e^k(m) extracts the embedding features based on word embeddings, and \u03c6_d(m) extracts the discrete features (Table 3 ). The first convolution layer CONV_1^k with n-gram filters of size d is applied to each embedding group k, and the result from each filter is max-pooled to generate a feature vector \u2208 R^{1\u00d7d}. The second convolution layer CONV_2 is then applied to the 3D feature matrix \u2208 R^{n\u00d7d\u00d7k} from the previous convolution layer on all embedding groups. The result of CONV_2 is max-pooled and concatenated with the discrete features extracted by \u03c6_d(m) to form the mention embedding r_s(m), defined as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 298,
"end": 306,
"text": "(Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Agglomerative CNN",
"sec_num": "4.1"
},
{
"text": "r_s(m) = CONV_2([CONV_1^1(\u03c6_e^1(m)); ...; CONV_1^k(\u03c6_e^k(m))]) \u2295 \u03c6_d(m)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative CNN",
"sec_num": "4.1"
},
{
"text": "r_p(m_i, m_j) = CONV_3([r_s(m_i); r_s(m_j)]) \u2295 \u03c6_p(m_i, m_j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative CNN",
"sec_num": "4.1"
},
{
"text": "The learned mention-pair embedding is put through a hidden layer with the rectified linear unit activation function (ReLU) before applying the sigmoid function \u03c3(m_i, m_j) to determine the coreferent relation between mentions m_i and m_j, defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative CNN",
"sec_num": "4.1"
},
{
"text": "h(x) = ReLU(w_h \u00b7 x + b_h), \u03c3(m_i, m_j) = sigmoid(w_s \u00b7 h(r_p(m_i, m_j)) + b_s)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative CNN",
"sec_num": "4.1"
},
{
"text": "The purpose of the sigmoid function \u03c3(m_i, m_j) is twofold. For each mention m_i, it performs binary classifications between m_i and m_j where j \u2208 [1, i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative CNN",
"sec_num": "4.1"
},
{
"text": "If max(\u03c3(m_i, m_j)) < 0.5, the model considers there to be no coreferent relation between m_i and any mention prior to it, and creates a new cluster containing only m_i, such that m_i becomes a singleton for the moment. If max(\u03c3(m_i, m_j)) \u2265 0.5, m_i is put into the existing cluster C_{m_k} that m_k belongs to, where m_k = arg max_j \u03c3(m_i, m_j). This formalism of mention clustering is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative CNN",
"sec_num": "4.1"
},
{
"text": "\u2022 If max_{1\u2264j<i}(\u03c3(m_i, m_j)) < 0.5, then create a new cluster C_{m_i}. \u2022 If max_{1\u2264j<i}(\u03c3(m_i, m_j)) \u2265 0.5, then C_{m_k} \u2190 C_{m_k} \u222a {m_i}, where m_k = arg max_j \u03c3(m_i, m_j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative CNN",
"sec_num": "4.1"
},
{
"text": "Table 3 shows the feature templates used for our ACNN model. Sentence and utterance embeddings are the average vectors of all word embeddings in the sentence and utterance, respectively. Speaker embeddings are randomly generated from a Gaussian distribution. Gender and plurality information are from Bergsma and Lin (2006) , and word animacy is from Durrett and Klein (2013) .",
"cite_spans": [
{
"start": 302,
"end": 324,
"text": "Bergsma and Lin (2006)",
"ref_id": "BIBREF1"
},
{
"start": 352,
"end": 376,
"text": "Durrett and Klein (2013)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 2,
"end": 9,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Agglomerative CNN",
"sec_num": "4.1"
},
{
"text": "For our experiments, word embeddings of dimension 50 are trained with FastText (Bojanowski et al., 2016) on the aggregation of New York Times, 4 Wikipedia, 5 and Amazon reviews. 6 The tanh activation function and a filter size of 280 are used for all convolution layers. A dropout rate of 0.8 is applied to all max-pooled convolution results, and \u21132 regularization is applied to the sigmoid function. The hidden layer has the same dimension as the filter size. Binary labels of 0 and 1 are assigned to each mention-to-mention pair based on the gold cluster information. The model is trained with a mean squared error loss function and the RMSprop optimizer.",
"cite_spans": [
{
"start": 79,
"end": 104,
"text": "(Bojanowski et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 156,
"end": 157,
"text": "5",
"ref_id": null
},
{
"start": 178,
"end": 179,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Configuration",
"sec_num": "4.2"
},
{
"text": "Coreference resolution groups mentions into clusters; however, it does not assign character labels to the clusters, which is required for character identification. This section describes our entity linking model that takes the mention embeddings and the mention-pair embeddings generated by ACNN and classifies each mention into one of the character labels (Figure 3 ). These embeddings are used to create cluster and cluster-mention embeddings through pooling, which give a significant improvement to character identification when included as features in our linker (Section 6).",
"cite_spans": [],
"ref_spans": [
{
"start": 352,
"end": 361,
"text": "(Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entity Linking",
"sec_num": "5"
},
{
"text": "R_s(C_m) = [r_s(m_1), r_s(m_2), ..., r_s(m_{|C_m|})], R_p(C_m, m) = [r_p(m_i, m) | m_i \u2260 m]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max Pooling",
"sec_num": null
},
{
"text": "CONV_s and CONV_p are two separate convolution layers with unigram filters using the tanh activation. The results from these layers are max-pooled. The cluster embedding r_s(C_m) and the mention-cluster embedding r_p(C_m, m) are defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max Pooling",
"sec_num": null
},
{
"text": "r_s(C_m) = CONV_s(avg_pool(R_s(C_m)) \u2295 max_pool(R_s(C_m))), r_p(C_m, m) = CONV_p(avg_pool(R_p(C_m, m)) \u2295 max_pool(R_p(C_m, m)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max Pooling",
"sec_num": null
},
{
"text": "The mention embedding, the cluster embedding, and the mention-cluster embedding are concatenated and fed into the network as input, and the scores of all character labels are activated as output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max Pooling",
"sec_num": null
},
{
"text": "A dropout layer of rate 0.8 is applied to all inputs. The model is trained as a multi-class classifier with the categorical cross-entropy loss function and the RMSprop optimizer. All hidden layers use the ReLU activation function and have the same number of hidden units as the dimension of the mention embeddings. The convolution layers use the same filter sizes as the dimensions of input embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Configuration",
"sec_num": "5.2"
},
{
"text": "Following Chen and Choi (2016) , experiments are conducted on two tasks, coreference resolution and entity linking. Our coreference resolution model shows robust performance compared to other state-of-the-art systems (Section 6.2). Our entity linking model significantly outperforms the heuristic-based approach from the previous work (Section 6.3). All models are evaluated on the gold mentions to focus purely on the analysis of these two tasks. [Table 4 : Coreference resolution results on the evaluation set (in %). \u00b5 = (MUC + B\u00b3 + CEAF_e) / 3. |C|: the average cluster size.]",
"cite_spans": [
{
"start": 10,
"end": 30,
"text": "Chen and Choi (2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "The corpus is split into the training, development, and evaluation sets (Table 5 ). For the episode-level, all mentions referring to the same character in each episode are grouped into one cluster (C Epi ). For the scene-level, this grouping is done by each scene such that there can be multiple mention clusters that refer to the same character within an episode (C Sce ).",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 80,
"text": "(Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Data Split",
"sec_num": "6.1"
},
{
"text": "Ambiguous mention types such as collective, general, and other are excluded from our experiments (Section 3); including those mentions requires developing different resolution models that we shall explore in the future. [Table 5 : The corpus split into training, development, and evaluation (TST) sets. E/S/DC/C_E/C_S/M: the numbers of episodes, scenes, distinct characters, episode/scene-level clusters, and mentions.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Split",
"sec_num": "6.1"
},
{
"text": "For entity linking, entity labels are predetermined by collecting the characters that appear in all three sets; characters that do not appear in all of the three sets are put together and labeled as Unknown. This is reasonable because it is not possible for a statistical model to learn about characters that do not appear in the training set. Likewise, characters that appear in the training set but not in the other sets cannot be developed or evaluated on. A total of ten labels are used for entity linking, consisting of the 9 most frequently appearing characters across all sets and Unknown (Figure 3 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 593,
"end": 602,
"text": "(Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Split",
"sec_num": "6.1"
},
{
"text": "To benchmark the robustness of our ACNN model (Section 4), two state-of-the-art coreference resolution systems are also evaluated. Episode and scene-level models are developed separately for all three systems using the same dataset in Table 5 . All system outputs are evaluated with the MUC (Vilain et al., 1995) , B\u00b3 (Bagga and Baldwin, 1998) , and CEAF_e (Luo, 2005) metrics suggested by the CoNLL'12 shared task (Pradhan et al., 2012) . The average score of five trials is reported for each metric to minimize variance because these systems use neural network approaches with random initialization and produce varying results per trial (Table 4) . [Figure 3 : Character labels used for entity linking.]",
"cite_spans": [
{
"start": 294,
"end": 315,
"text": "(Vilain et al., 1995)",
"ref_id": "BIBREF20"
},
{
"start": 322,
"end": 347,
"text": "(Bagga and Baldwin, 1998)",
"ref_id": "BIBREF0"
},
{
"start": 361,
"end": 372,
"text": "(Luo, 2005)",
"ref_id": "BIBREF11"
},
{
"start": 419,
"end": 441,
"text": "(Pradhan et al., 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 238,
"end": 245,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 642,
"end": 651,
"text": "(Table 4)",
"ref_id": null
},
{
"start": 654,
"end": 662,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "6.2"
},
{
"text": "Comparison to the State-of-the-Art: When trained and evaluated on our dataset, both the Stanford (Clark and Manning, 2016) and the Harvard (Wiseman et al., 2016) systems give results comparable to their performance on the CoNLL'12 dataset. 7 The Stanford system using its pre-trained model gives \u00b5 scores of 47.67% and 64.14% for the episode and scene levels respectively, which signifies the importance of in-domain training data. [Table 6 : Entity linking results on the evaluation set (in %). The F1 score is reported for each character. E/S: episode/scene level. Unk.: unknown. Avg: the macro-average F1 score between all characters. Acc: (the number of correctly labeled mentions) / (the total number of mentions).]",
"cite_spans": [
{
"start": 101,
"end": 126,
"text": "(Clark and Manning, 2016)",
"ref_id": "BIBREF4"
},
{
"start": 143,
"end": 165,
"text": "(Wiseman et al., 2016)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 443,
"end": 450,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "6.2"
},
{
"text": "All systems consistently show higher scores for the scene-level than for the episode-level, which confirms the difficulty of this task on larger documents. Although both systems take advantage of global cluster features, they reveal different strengths in resolving mentions with respect to the cluster size. The Stanford system excels at the episode-level, which is primarily attributed to the cluster-based nature of this system; it is able to find more accurate coreferent chains when the clusters are larger. The Harvard system performs best at the scene-level, indicating that its neural architecture with Long Short-Term Memory cells captures more meaningful cluster features when the clusters are smaller.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "6.2"
},
{
"text": "In comparison to the other state-of-the-art systems, our ACNN model shows competitive performance; it gives the highest B\u00b3 scores and comparable \u00b5 scores for both the episode and scene levels. We measure the average cluster size produced by each system for further analysis (|C| in Table 4 ). The Harvard system produces smaller clusters than the other two systems. Such a tendency gives purer clusters, favored by the CEAF_e metric for the scene-level. However, it is prone to breaking up too many links, which leads to poor performance in the B\u00b3 evaluation on the episode-level.",
"cite_spans": [],
"ref_spans": [
{
"start": 272,
"end": 279,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison to Agglomerative CNN",
"sec_num": null
},
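The B-cubed (B 3) behavior discussed above can be illustrated with a minimal sketch, not from the paper, of the metric as defined by Bagga and Baldwin (1998): each mention's precision and recall compare the response cluster and gold cluster containing it, so an over-split response keeps perfect precision but loses recall. The cluster contents below are hypothetical.

```python
# Minimal sketch of the B-cubed (B3) coreference metric: per-mention
# precision/recall over cluster intersections, averaged across mentions.
def b_cubed(gold, response):
    """gold, response: lists of clusters (sets of mention ids) over the same mentions."""
    def score(src, dst):
        total, n = 0.0, 0
        for c in src:
            for m in c:
                d = next(x for x in dst if m in x)  # cluster in dst containing m
                total += len(c & d) / len(c)
                n += 1
        return total / n

    precision = score(response, gold)
    recall = score(gold, response)
    return precision, recall, 2 * precision * recall / (precision + recall)

gold = [{1, 2, 3, 4}]
split = [{1, 2}, {3, 4}]  # over-split response: pure clusters, broken links
print(b_cubed(gold, split))  # precision 1.0, recall only 0.5
```

This mirrors the analysis above: a system that breaks up links (smaller, purer clusters) is rewarded on precision-oriented views but penalized by B3 recall.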
{
"text": "The performance of our model is encouraging although coreference resolution is not the end goal. We design this model to automatically generate mention embeddings and mention-pair embeddings that are used to construct cluster features for entity linking. However, even though this model's success in coreference resolution is not our final objective, its success directly correlates to the success of entity linking because of the similarity between these two tasks. Due to the similar nature of these two tasks, the success of coreference resolution directly correlates to that of entity linking. These embed-dings are the essence of our entity linking model, leading to a huge improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Agglomerative CNN",
"sec_num": null
},
{
"text": "The heuristic-based approach proposed by Chen and Choi (2016) is adapted to establish the baseline. Two statistical models are experimented for both the episode and scene levels, one using only mention embeddings and the other using both mention embeddings and cluster embeddings (Section 5). All models are evaluated with the F1 scores of character labels, the macro-average F1 scores between all labels, and the label accuracies. The average scores of five trials are reported in Table 6 .",
"cite_spans": [
{
"start": 41,
"end": 61,
"text": "Chen and Choi (2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 482,
"end": 489,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entity Linking",
"sec_num": "6.3"
},
{
"text": "The heuristic-based approach is applied to the mention clusters found by our coreference resolution model. Two rules, 1) proper noun and 2) first-person pronoun matches, are used to assign character labels to all mentions. The label of each cluster is then determined by the majority vote between the mention labels within the cluster. Finally, the cluster label is assigned to all mentions in that cluster. This model performs better when it is applied to the episode-level clusters because larger clusters provide more mention labels, which makes the majority vote more reliable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B: Baseline Model",
"sec_num": null
},
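The baseline procedure above (two labeling rules, then a cluster-level majority vote propagated back to every mention) can be sketched as follows. This is a hypothetical illustration, not the paper's implementation; the mention representation, pronoun set, and example names are assumptions.

```python
# Sketch of the baseline heuristic: label mentions by (1) proper-noun match
# and (2) first-person-pronoun-to-speaker match, then assign each cluster's
# majority label to all of its mentions.
from collections import Counter

def label_cluster(mentions, known_names):
    """mentions: list of (surface_form, speaker) pairs in one coreference cluster;
    known_names: set of character names."""
    labels = []
    for surface, speaker in mentions:
        if surface in known_names:                            # rule 1: proper noun
            labels.append(surface)
        elif surface.lower() in {"i", "me", "my", "myself"}:  # rule 2: 1st-person pronoun
            labels.append(speaker)
    majority = Counter(labels).most_common(1)[0][0] if labels else "Unknown"
    return [majority] * len(mentions)  # cluster label propagated to all mentions

print(label_cluster([("I", "Ross"), ("Ross", "Rachel"), ("he", "Monica")],
                    {"Ross", "Rachel"}))  # → ['Ross', 'Ross', 'Ross']
```

The sketch also shows why larger (episode-level) clusters help this baseline: more mentions means more rule matches feeding the majority vote.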
{
"text": "This model takes advantage of the mention embeddings generated by our ACNN model. Compared to the baseline, it gives over a 21% higher average F1 score, and over a 15% higher label accuracy for the episode and the scene levels, respectively. Interestingly, this model shows higher performance for the scene-level, which is not the case for the other two models. This implies that the mention embeddings learned from scene-level documents are more informative than those learned from episode-level ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ME: Mention Embedding Model",
"sec_num": null
},
{
"text": "In this paper, we explore a relatively new task, character identification on multiparty dialogues, and introduce a novel perspective on approaching the task with coreference resolution and entity linking. We improve and augment finer-grained annotation over the existing corpus that simulates real conversations. We propose a deep convolutional neural network to agglomerate groups of features into mention, mention-pair, cluster, and mention-cluster embeddings that are optimized for entity prediction. Our coreference resolution result shows an improvement on the updated version of the corpus. Our entity linking result reaches to the accuracy that is sufficient for real-world applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ME: Mention Embedding Model",
"sec_num": null
},
{
"text": "To the best of our knowledge, our work is the first time that such deep convolution layers have been used for training mention and cluster embeddings. Our results show that the generation of these embeddings is crucial for the success of entity linking on multiparty dialogues. For future work, we will continue to increase the size of the corpus with high-quality and disambiguated annotation. We also wish to improve the embeddings to represent plural and collective mentions, thus we can build upon our entity linking model incorporating manyto-many linkings between entities and mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ME: Mention Embedding Model",
"sec_num": null
},
{
"text": "The dialogues are extracted from TV show transcripts by the previous work(Chen and Choi, 2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "nlp.mathcs.emory.edu/character-mining/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This task is known as 'Wikification'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "catalog.ldc.upenn.edu/ldc2008t19 5 dumps.wikimedia.org/enwiki/ 6 snap.stanford.edu/data/web-Amazon.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The Stanford and the Harvard systems reported \u00b5 scores of 65.73% and 64.21% on the CoNLL'12 dataset, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This case is also reflected on its coreference resolution performance where the scene-level scores are higher than the episode-level scores (Table 4) .",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 149,
"text": "(Table 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "While the mention embeddings give a significant improvement over the baseline, further improvement is made when they are coupled with the cluster and mention-cluster embeddings. The episodelevel cluster embedding model shows an average F1 score of 86.76% and a label accuracy of 95.30%, which is another 15% improvement, suggesting a practical use of this model in real applications. A couple of important observations are made:\u2022 Cluster and mention-cluster embeddings, although learned during coreference resolution, are crucial for entity linking such that a coreference resolution model specifically designed for multiparty dialogues is necessary to build the state-of-the-art entity linking model for this genre.\u2022 Clusters generated from the episode-level documents provide more information than those from the scene-level do, which aligns with the conclusion made by Chen and Choi (2016) .",
"cite_spans": [
{
"start": 872,
"end": 892,
"text": "Chen and Choi (2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CE: Cluster Embedding Model",
"sec_num": null
},
{
"text": "An error analysis is performed on the episode-level cluster embedding model. From the confusion matrix in Table 7 , two common system errors are detected. First, most of the mispredictions identify Unknown as specific characters. Second, the performance on the secondary characters, Carol, Mindy, and Barry, is subpar with respect to other entities. This subpar performance likely stems from a paucity of appearances by these secondary characters. For example, Mindy constitutes 1% of the dataset (Figure 3 ) and has only nine occurrences in the evaluation set. Our best model is robust in identifying the primary characters, showing an average F1 score of 96.38% and an accuracy of 98.42% on the evaluation set.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 113,
"text": "Table 7",
"ref_id": null
},
{
"start": 497,
"end": 506,
"text": "(Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Algorithms for scoring coreference chains",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "Breck",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "The first international conference on language resources and evaluation workshop on linguistics coreference. Citeseer",
"volume": "1",
"issue": "",
"pages": "563--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The first interna- tional conference on language resources and evalu- ation workshop on linguistics coreference. Citeseer, volume 1, pages 563-566.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Bootstrapping path-based pronoun resolution",
"authors": [
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shane Bergsma and Dekang Lin. 2006. Bootstrapping path-based pronoun resolution. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Associa- tion for Computational Linguistics. Association for Computational Linguistics, Sydney, Australia, pages 33-40.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.04606"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vec- tors with subword information. arXiv preprint arXiv:1607.04606 .",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Character identification on multiparty conversation: Identifying mentions of characters in tv shows",
"authors": [
{
"first": "Yu-Hsin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jinho",
"middle": [
"D."
],
"last": "Choi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "90--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu-Hsin Chen and Jinho D. Choi. 2016. Character identification on multiparty conversation: Identify- ing mentions of characters in tv shows. In Proceed- ings of the 17th Annual Meeting of the Special Inter- est Group on Discourse and Dialogue. Association for Computational Linguistics, Los Angeles, pages 90-100. http://www.aclweb.org/anthology/W16-",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Deep reinforcement learning for mention-ranking coreference models",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D."
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2256--2262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing. Association for Computa- tional Linguistics, Austin, Texas, pages 2256-2262. https://aclweb.org/anthology/D16-1245.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Easy victories and uphill battles in coreference resolution",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceed- ings of the Conference on Empirical Methods in Nat- ural Language Processing. Association for Compu- tational Linguistics, Seattle, Washington.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Capturing semantic similarity for entity linking with convolutional neural networks",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Francis-Landau",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1256--1261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Francis-Landau, Greg Durrett, and Dan Klein. 2016. Capturing semantic similarity for entity linking with convolutional neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Tech- nologies. Association for Computational Linguis- tics, San Diego, California, pages 1256-1261. http://www.aclweb.org/anthology/N16-1150.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Entity extraction, linking, classification, and tagging for social media: A wikipedia-based approach",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Gattani",
"suffix": ""
},
{
"first": "Digvijay",
"middle": [
"S"
],
"last": "Lamba",
"suffix": ""
},
{
"first": "Nikesh",
"middle": [],
"last": "Garera",
"suffix": ""
},
{
"first": "Mitul",
"middle": [],
"last": "Tiwari",
"suffix": ""
},
{
"first": "Xiaoyong",
"middle": [],
"last": "Chai",
"suffix": ""
},
{
"first": "Sanjib",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Sri",
"middle": [],
"last": "Subramaniam",
"suffix": ""
},
{
"first": "Anand",
"middle": [],
"last": "Rajaraman",
"suffix": ""
},
{
"first": "Venky",
"middle": [],
"last": "Harinarayan",
"suffix": ""
},
{
"first": "Anhai",
"middle": [],
"last": "Doan",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. VLDB Endow",
"volume": "6",
"issue": "11",
"pages": "1126--1137",
"other_ids": {
"DOI": [
"10.14778/2536222.2536237"
]
},
"num": null,
"urls": [],
"raw_text": "Abhishek Gattani, Digvijay S. Lamba, Nikesh Garera, Mitul Tiwari, Xiaoyong Chai, Sanjib Das, Sri Subramaniam, Anand Rajaraman, Venky Harinarayan, and AnHai Doan. 2013. En- tity extraction, linking, classification, and tag- ging for social media: A wikipedia-based ap- proach. Proc. VLDB Endow. 6(11):1126-1137. https://doi.org/10.14778/2536222.2536237.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "To Link or Not to Link? A Study on Endto-End Tweet Entity Linking",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Emre",
"middle": [],
"last": "Kiciman",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology. NAACL",
"volume": "",
"issue": "",
"pages": "1020--1030",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Guo, Ming-Wei Chang, and Emre Kiciman. 2013. To Link or Not to Link? A Study on End- to-End Tweet Entity Linking. In Proceedings of the Conference of the North American Chapter of the As- sociation for Computational Linguistics on Human Language Technology. NAACL, pages 1020-1030.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep neural network approach for the dialog state tracking challenge",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the SIGDIAL 2013 Conference",
"volume": "",
"issue": "",
"pages": "467--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Henderson, Blaise Thomson, and Steve Young. 2013. Deep neural network approach for the dialog state tracking challenge. In Proceed- ings of the SIGDIAL 2013 Conference. Association for Computational Linguistics, Metz, France, pages 467-471. http://www.aclweb.org/anthology/W13- 4073.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Fourth Dialog State Tracking Challenge",
"authors": [
{
"first": "Seokhwan",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Fernandodharo",
"suffix": ""
},
{
"first": "Rafael",
"middle": [
"E"
],
"last": "Banchs",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 4th Dialog State Tracking Challenge. DSTC4",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seokhwan Kim, Luis FernandoDHaro, Rafael E. Banchs, Jason D. Williams, and Matthew Hender- son. 2015. The Fourth Dialog State Tracking Chal- lenge. In Proceedings of the 4th Dialog State Track- ing Challenge. DSTC4.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "On coreference resolution performance metrics",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqiang Luo. 2005. On coreference resolution per- formance metrics. In Proceedings of the confer- ence on Human Language Technology and Empiri- cal Methods in Natural Language Processing. Asso- ciation for Computational Linguistics, pages 25-32.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Wikify!: Linking Documents to Encyclopedic Knowledge",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Andras",
"middle": [],
"last": "Csomai",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Sixteenth ACM Conference on Conference on Information and Knowledge Management. CIKM'07",
"volume": "",
"issue": "",
"pages": "233--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Andras Csomai. 2007a. Wikify!: Linking Documents to Encyclopedic Knowledge. In Proceedings of the Sixteenth ACM Conference on Conference on Information and Knowledge Manage- ment. CIKM'07, pages 233-242.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Wikify!: Linking documents to encyclopedic knowledge",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Andras",
"middle": [],
"last": "Csomai",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Sixteenth ACM Conference on Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "233--242",
"other_ids": {
"DOI": [
"10.1145/1321440.1321475"
]
},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Andras Csomai. 2007b. Wik- ify!: Linking documents to encyclopedic knowledge. In Proceedings of the Sixteenth ACM Conference on Conference on Informa- tion and Knowledge Management. ACM, New York, NY, USA, CIKM '07, pages 233-242. https://doi.org/10.1145/1321440.1321475.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The DARE Corpus: A Resource for Anaphora Resolution in Dialogue Based Intelligent Tutoring Systems",
"authors": [
{
"first": "Nobal",
"middle": [
"B."
],
"last": "Niraula",
"suffix": ""
},
{
"first": "Vasile",
"middle": [],
"last": "Rus",
"suffix": ""
},
{
"first": "Rajendra",
"middle": [],
"last": "Banjade",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Stefanescu",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Baggett",
"suffix": ""
},
{
"first": "Brent",
"middle": [],
"last": "Morgan",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation. LREC'14",
"volume": "",
"issue": "",
"pages": "3199--3203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nobal B. Niraula, Vasile Rus, Rajendra Banjade, Dan Stefanescu, William Baggett, and Brent Morgan. 2014. The DARE Corpus: A Resource for Anaphora Resolution in Dialogue Based Intelligent Tutoring Systems. In Proceedings of the Ninth International Conference on Language Resources and Evaluation. LREC'14, pages 3199-3203.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A Joint Framework for Coreference Resolution and Mention Head Detection",
"authors": [
{
"first": "Haoruo",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th Conference on Computational Natural Language Learning. CoNLL'15",
"volume": "",
"issue": "",
"pages": "12--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoruo Peng, Kai-Wei Chang, and Dan Roth. 2015. A Joint Framework for Coreference Resolution and Mention Head Detection. In Proceedings of the 9th Conference on Computational Natural Language Learning. CoNLL'15, pages 12-21.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "CoNLL-2012 Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Sixteenth Conference on Computational Natural Language Learning: Shared Task. CoNLL'12",
"volume": "",
"issue": "",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL- 2012 Shared Task: Modeling Multilingual Unre- stricted Coreference in OntoNotes. In Proceedings of the Sixteenth Conference on Computational Nat- ural Language Learning: Shared Task. CoNLL'12, pages 1-40.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Local and Global Algorithms for Disambiguation to Wikipedia",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Anderson",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. ACL'11",
"volume": "",
"issue": "",
"pages": "1375--1384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Ratinov, Dan Roth, Doug Downey, and Mike An- derson. 2011a. Local and Global Algorithms for Disambiguation to Wikipedia. In Proceedings of the 49th Annual Meeting of the Association for Com- putational Linguistics: Human Language Technolo- gies. ACL'11, pages 1375-1384.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Local and global algorithms for disambiguation to wikipedia",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Anderson",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Ratinov, Dan Roth, Doug Downey, and Mike Anderson. 2011b. Local and global algo- rithms for disambiguation to wikipedia. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguis- tics: Human Language Technologies -Volume",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Coreference Resolution in Dialogues in English and Portuguese",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Rocha",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Workshop on Coreference and Its Applications. CorefApp'99",
"volume": "",
"issue": "",
"pages": "53--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Rocha. 1999. Coreference Resolution in Dia- logues in English and Portuguese. In Proceedings of the Workshop on Coreference and Its Applications. CorefApp'99, pages 53-60.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A modeltheoretic coreference scoring scheme",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Vilain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "Dennis",
"middle": [],
"last": "Connolly",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 6th conference on Message understanding. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "45--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Pro- ceedings of the 6th conference on Message under- standing. Association for Computational Linguis- tics, pages 45-52.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning anaphoricity and antecedent ranking features for coreference resolution",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Shieber",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sam Wiseman, Alexander M. Rush, Stuart Shieber, and Jason Weston. 2015. Learning anaphoric- ity and antecedent ranking features for corefer- ence resolution. In Proceedings of the 53rd",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1416--1426",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computa- tional Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers). Association for Computa- tional Linguistics, Beijing, China, pages 1416-1426. http://www.aclweb.org/anthology/P15-1137.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning global features for coreference resolution",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Stuart",
"middle": [
"M"
],
"last": "Shieber",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "994--1004",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning global features for coref- erence resolution. In Proceedings of the 2016 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies. Association for Computational Linguistics, San Diego, California, pages 994-1004. http://www.aclweb.org/anthology/N16-1114.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A deep learning framework for coreference resolution based on convolutional neural network",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Wu",
"suffix": ""
},
{
"first": "W",
"middle": [
"Y"
],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE 11th International Conference on Semantic Computing (ICSC)",
"volume": "",
"issue": "",
"pages": "61--64",
"other_ids": {
"DOI": [
"10.1109/ICSC.2017.57"
]
},
"num": null,
"urls": [],
"raw_text": "J. L. Wu and W. Y. Ma. 2017. A deep learn- ing framework for coreference resolution based on convolutional neural network. In 2017 IEEE 11th International Conference on Semantic Computing (ICSC). pages 61-64. https://doi.org/10.1109/ICSC.2017.57.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The overview of our agglomerative convolutional neural network.The second part of the network utilizes the learned mention embedding r s (m) to create the mentionpair embedding. Another feature map \u03c6 p (m i , m j ) is defined to extract pairwise features between mentions m i and m j (Table 3). The third convolution layer CONV 3 is applied to the stacked mention embeddings, r s (m i ) and r s (m j ). The result is maxpooled and concatenated with the pairwise features extracted by \u03c6 p (m i , m j ) to form the mention-pair embedding r p (m i , m j ), defined as follows:",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "info. of all words in m Avg. plurality info. of all words in m Avg. word animacy of all words in m Embedding of the current speaker Embeddings of the previous 2 speakers \u03c6 p (m i , m j ) Exact string match between m i and m j Relaxed string match between m i and m j Speaker match between m i and m j Mention distance between m i and m j Sentence distance between m i and m j",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "The overview of our entity linking model. Cluster m and Cluster p embeddings are derived from mention and mention-pair embeddings, resp.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "illustrates our entity linking model based on a feed-forward neural network with two hidden layers. For each mention m, the model takes the mention embedding r s (m) and two cluster embeddings derived from mention embeddings and mention-pair embeddings within the cluster C(m) (Section 5.2) and classifies m into one of the entity labels using the Softmax regression.5.1 Cluster EmbeddingTwo types of cluster embeddings are derived to capture cluster information. Given a mention m and its cluster C m , cluster embedding R s (C m ) represents the collective mention embedding of all mentions within C m , and mention-cluster embedding R p (C m , m) represents the collective mention-pair embedding between m and all the other mentions in C m that are compared to m during coreference resolution (\u2200 i . m i \u2208 C m ):",
"type_str": "figure",
"num": null,
"uris": null
},
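The two cluster embeddings described in the FIGREF3 caption above can be sketched as pooling operations over the ACNN outputs. This is a hypothetical NumPy illustration: mean pooling and the data layout (a mention-embedding matrix plus a dictionary of pairwise embeddings) are assumptions, not the paper's exact operators.

```python
# Sketch: R_s(C_m) pools the mention embeddings r_s of all mentions in the
# cluster; R_p(C_m, m) pools the mention-pair embeddings r_p between mention
# m and every other mention in the same cluster.
import numpy as np

def cluster_embeddings(mention_embs, pair_embs, m_idx):
    """mention_embs: (n, d) array of r_s vectors for one cluster;
    pair_embs: dict mapping sorted index pairs (i, j) -> r_p vector;
    m_idx: index of the target mention m within the cluster."""
    R_s = mention_embs.mean(axis=0)                       # collective mention embedding
    pairs = [pair_embs[(min(i, m_idx), max(i, m_idx))]    # all pairs involving m
             for i in range(len(mention_embs)) if i != m_idx]
    R_p = np.mean(pairs, axis=0)                          # collective mention-pair embedding
    return R_s, R_p
```

The concatenation [r_s(m); R_s(C_m); R_p(C_m, m)] would then feed the two-hidden-layer feed-forward classifier described above.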
"TABREF1": {
"text": "Counts of disambiguated mentions. P/S: main and secondary character entities. C/G/N/O: Collective/General/Generic/Other.",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF2": {
"text": "Complete feature templates for ACNN.",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF5": {
"text": "The training (TRN), development(DEV)",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
}
}
}
}