{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:13:20.201973Z"
},
"title": "Named Entity Recognition in Multi-level Contexts",
"authors": [
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Chuhan",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": "wuchuhan15@gmail.com"
},
{
"first": "Tao",
"middle": [],
"last": "Qi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Zhigang",
"middle": [],
"last": "Yuan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": "zgyuan@tsinghua.edu.cn"
},
{
"first": "Yongfeng",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": "yfhuang@tsinghua.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Named entity recognition is a critical task in the natural language processing field. Most existing methods for this task can only exploit contextual information within a sentence. However, their performance on recognizing entities in limited or ambiguous sentence-level contexts is usually unsatisfactory. Fortunately, other sentences in the same document can provide supplementary document-level contexts to help recognize these entities. In addition, words themselves contain word-level contextual information since they usually have different preferences of entity type and relative position from named entities. In this paper, we propose a unified framework to incorporate multi-level contexts for named entity recognition. We use TagLM as our basic model to capture sentence-level contexts. To incorporate document-level contexts, we propose to capture interactions between sentences via a multi-head self attention network. To mine word-level contexts, we propose an auxiliary task to predict the type of each word to capture its type preference. We jointly train our model in entity recognition and the auxiliary classification task via multi-task learning. The experimental results on several benchmark datasets validate the effectiveness of our method. Sentence 1 When Fred was still in High School he set up a business with his mother called Elizabeth Trump (PER) and Son. \u00d7 When Fred was still in High School he set up a business with his mother called Elizabeth Trump and Son (ORG). Russ Berrie and Co Inc (ORG) said on Friday that A. Curts Cooke (PER) will retire as chief operating officer. Russ Berrie and Co Inc (PER) said on Friday that A. Curts Cooke (PER) will retire as chief operating officer. \u00d7 Sentence 2 When Fred was still in high school he set up a business with his mother called Elizabeth Trump and Son. While in college, Donald Trump (PER) began his first real estate career at his father's company, Elizabeth Trump and Son (ORG). 
Action Performance Cos Inc (ORG) said Friday it has agreed to acquire Motorsport Traditions Ltd (ORG) and Creative Marketing & Promotions Inc (ORG) for aboud $13 million in cash and stock. \u2026\u2026 Place Dome Inc (ORG) too was considered unlikely because it is focusing on geographic expansion in areas that \u2026\u2026 Russ Berrie and Co Inc said on Friday that A. Curts Cooke will retire as chief operating officer.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Named entity recognition is a critical task in the natural language processing field. Most existing methods for this task can only exploit contextual information within a sentence. However, their performance on recognizing entities in limited or ambiguous sentence-level contexts is usually unsatisfactory. Fortunately, other sentences in the same document can provide supplementary document-level contexts to help recognize these entities. In addition, words themselves contain word-level contextual information since they usually have different preferences of entity type and relative position from named entities. In this paper, we propose a unified framework to incorporate multi-level contexts for named entity recognition. We use TagLM as our basic model to capture sentence-level contexts. To incorporate document-level contexts, we propose to capture interactions between sentences via a multi-head self attention network. To mine word-level contexts, we propose an auxiliary task to predict the type of each word to capture its type preference. We jointly train our model in entity recognition and the auxiliary classification task via multi-task learning. The experimental results on several benchmark datasets validate the effectiveness of our method. Sentence 1 When Fred was still in High School he set up a business with his mother called Elizabeth Trump (PER) and Son. \u00d7 When Fred was still in High School he set up a business with his mother called Elizabeth Trump and Son (ORG). Russ Berrie and Co Inc (ORG) said on Friday that A. Curts Cooke (PER) will retire as chief operating officer. Russ Berrie and Co Inc (PER) said on Friday that A. Curts Cooke (PER) will retire as chief operating officer. \u00d7 Sentence 2 When Fred was still in high school he set up a business with his mother called Elizabeth Trump and Son. While in college, Donald Trump (PER) began his first real estate career at his father's company, Elizabeth Trump and Son (ORG). 
Action Performance Cos Inc (ORG) said Friday it has agreed to acquire Motorsport Traditions Ltd (ORG) and Creative Marketing & Promotions Inc (ORG) for aboud $13 million in cash and stock. \u2026\u2026 Place Dome Inc (ORG) too was considered unlikely because it is focusing on geographic expansion in areas that \u2026\u2026 Russ Berrie and Co Inc said on Friday that A. Curts Cooke will retire as chief operating officer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named Entity Recognition (NER) is defined as automatically identifying and classifying named entities into specific categories (e.g., person, location, organization) in text. It is a critical task in Natural Language Processing (NLP) and a prerequisite for many downstream tasks, such as entity linking (Luo et al., 2015) , relation extraction (Feldman and Rosenfeld, 2006) and question answering (Lee et al., 2006) .",
"cite_spans": [
{
"start": 303,
"end": 321,
"text": "(Luo et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 344,
"end": 373,
"text": "(Feldman and Rosenfeld, 2006)",
"ref_id": "BIBREF9"
},
{
"start": 397,
"end": 415,
"text": "(Lee et al., 2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "NER is usually modeled as a sentence-level sequence labeling task in previous work. For example, Lample et al. (2016) used long-short term Figure 1 : Examples of document-and word-level contextual evidence. Blue italic and red underlined entities are the names of organizations and persons respectively. Green and orange arrows indicate the document-and word-level contextual evidence respectively. memory (LSTM) (Gers et al., 2000) for capturing contextual word representations and conditional random fieid (CRF) (Lafferty et al., 2001 ) for jointly label decoding. In recent years, language models (LMs) were introduced to this task to learn better contextual representations of words (Peters et al., 2017 (Peters et al., , 2018 Devlin et al., 2019) . However, these methods only consider the contexts within a sentence, which is insufficient.",
"cite_spans": [
{
"start": 97,
"end": 117,
"text": "Lample et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 413,
"end": 432,
"text": "(Gers et al., 2000)",
"ref_id": "BIBREF10"
},
{
"start": 514,
"end": 536,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF15"
},
{
"start": 687,
"end": 707,
"text": "(Peters et al., 2017",
"ref_id": "BIBREF26"
},
{
"start": 708,
"end": 730,
"text": "(Peters et al., , 2018",
"ref_id": "BIBREF27"
},
{
"start": 731,
"end": 751,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 139,
"end": 147,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work is motivated by the observation that the contextual information beyond sentences can mitigate the negative effects of the ambiguous and limited sentence contexts. The sentences within a document are highly related, and the interactions between them can provide document-level contextual information. For example, in Figure 1 , sentence 1 is ambiguous because it can be either his mother called Elizabeth Trump or a business called Elizabeth Trump and Son. But another sentence in this document explicitly mentions Elizabeth Trump and Son as a company's name and solves the ambiguity. Besides, words themselves contain prefer-ences of entity type and relative position from the entities, and the preferences provide word-level contextual information. For instance, the sentence 2 in Figure 1 has limited contexts, and the word said can easily mislead the classification of the type of Co Inc. However, the multiple mentions of Inc in other sentences indicate its preference to appear as the last word of organizations. Thus, these preferences of words have the potential to help recognize entity types more correctly.",
"cite_spans": [],
"ref_spans": [
{
"start": 325,
"end": 333,
"text": "Figure 1",
"ref_id": null
},
{
"start": 791,
"end": 799,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a unified framework for NER to incorporate multi-level contexts. Our framework is based on TagLM (Peters et al., 2017) , which captures morphological and sentence-level contextual information with two-layer bidirectional gated recurrent units (BiGRUs) (Chung et al., 2014) . We apply the neural attention mechanism (Bahdanau et al., 2014) to the hidden states of TagLM's bottom BiGRU to learn sentence representations, and contextualize them with a sentencelevel BiGRU. To mine document-level contexts, we propose to apply the multi-head self attention mechanism (Vaswani et al., 2017) to the sentencelevel BiGRU's hidden states to capture the relations between sentences. To fuse the document-level context, we combine the output document representations of the self attention module with the corresponding sentence's bottom hidden states and feed them into TagLM's top BiGRU. Besides, to mine word-level contextual information, we propose an auxiliary word classifier to predict the probability distributions of word labels because the label distributions describe the type and position preferences of words. The auxiliary word classification task is jointly trained with our NER model via multi-task learning. We concatenate the top BiGRU's output representations with the output probability vectors of the word classifier to fuse the word-level context and feed them into a CRF for sequence decoding.",
"cite_spans": [
{
"start": 123,
"end": 144,
"text": "(Peters et al., 2017)",
"ref_id": "BIBREF26"
},
{
"start": 278,
"end": 298,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 341,
"end": 364,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 589,
"end": 611,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this paper are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose to fuse multi-level contexts for the NER task with a unified framework. \u2022 We propose to exploit the document-level context by capturing the interactions between sentences within a document with the multi-head self attention mechanism. \u2022 We propose to mine the word-level context with an auxiliary word classification task to learn the words' preferences of entity type and relative position from the entities. \u2022 We conduct experiments on several bench-mark datasets, and the results validate the effectiveness of our method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In traditional NER methods, contexts are usually modeled via hand-crafted features. For example, Passos et al. (2014) trained phrase vectors in their lexicon-infused skip-gram model. Lin and Wu (2009) used a linear chain CRF and added phrase cluster features extracted from the web data. However, these methods require heavy feature engineering, which necessities massive domain knowledge. In addition, these methods cannot make full use of contextual information within texts.",
"cite_spans": [
{
"start": 97,
"end": 117,
"text": "Passos et al. (2014)",
"ref_id": "BIBREF25"
},
{
"start": 183,
"end": 200,
"text": "Lin and Wu (2009)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In recent years, many neural networks were applied to the NER task. Collobert et al. (2011) first adopted CNNs to learn word representations. Recently, BiLSTM was widely used for long distance context modeling (Chiu and Nichols, 2016; Lample et al., 2016; Ma and Hovy, 2016) . Additionally, Chiu and Nichols (2016) employed CNNs to capture morphological word representations; Lample et al. 2016utilized CRF to model the dependencies between adjacent tags; Ma and Hovy (2016) proposed LSTM-CNNs-CRF model to combine the strengths of these components. Besides, Strubell et al. (2017) proposed iterated-dilated CNNs for higher efficiency than BiLSTM and better capacity with large context than vanilla CNNs. Recent work proved that the context-sensitive representations captured by language models are useful in NER systems. Peters et al. (2017) proposed TagLM model and introduced LM embeddings in this task. Afterwards, ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) were proposed for better contextual representations. However, these methods focused only on the context within a sentence, so their performance is substantially hurt by the ambiguity and limitation of sentence context.",
"cite_spans": [
{
"start": 68,
"end": 91,
"text": "Collobert et al. (2011)",
"ref_id": "BIBREF7"
},
{
"start": 210,
"end": 234,
"text": "(Chiu and Nichols, 2016;",
"ref_id": "BIBREF4"
},
{
"start": 235,
"end": 255,
"text": "Lample et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 256,
"end": 274,
"text": "Ma and Hovy, 2016)",
"ref_id": "BIBREF21"
},
{
"start": 456,
"end": 474,
"text": "Ma and Hovy (2016)",
"ref_id": "BIBREF21"
},
{
"start": 559,
"end": 581,
"text": "Strubell et al. (2017)",
"ref_id": "BIBREF32"
},
{
"start": 822,
"end": 842,
"text": "Peters et al. (2017)",
"ref_id": "BIBREF26"
},
{
"start": 924,
"end": 945,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF27"
},
{
"start": 955,
"end": 976,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To combine contexts beyond sentences, several methods were proposed to mine document-level information, such as logical rules (Mikheev et al., 1999) , global attention (Xu et al., 2018; Zhang et al., 2018; Hu et al., 2020) and memory mechanisms (Gui et al., 2020) . But these methods ignored the sequential characteristics of the sentences within a document, which may be sub-optimal. We observe that contextual associations between sentences in a document have the potential of improving the NER performance. Moreover, the words' preferences of entity type and relative position from the entities Figure 2 : Overview of our multi-level context framework. The character representation is captured with a twolayer BiGRU. The document representation is captured with the multi-head self attention mechanism. The word label distribution is predicted by a two-layer neural network.",
"cite_spans": [
{
"start": 126,
"end": 148,
"text": "(Mikheev et al., 1999)",
"ref_id": "BIBREF22"
},
{
"start": 168,
"end": 185,
"text": "(Xu et al., 2018;",
"ref_id": "BIBREF34"
},
{
"start": 186,
"end": 205,
"text": "Zhang et al., 2018;",
"ref_id": "BIBREF36"
},
{
"start": 206,
"end": 222,
"text": "Hu et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 245,
"end": 263,
"text": "(Gui et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 598,
"end": 606,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "contain word-level contextual information, which is ignored by most previous work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Based on these observations, we propose a unified framework to combine multi-level contexts in this paper. Our framework is based on the TagLM model, which captures sentence-level context with two stacked BiGRUs and models tag dependencies with CRF. To exploit the document-level context, we propose to capture the interactions between sentences within a document with multi-head self attention mechanism (Vaswani et al., 2017) . Besides, to mine the word-level context, we propose an auxiliary word classification task to encode the words' type and position preferences. We train our model in the NER and the auxiliary task via multitask learning. We conduct experiments on several benchmark datasets, and the results demonstrate the effectiveness of multi-level contexts.",
"cite_spans": [
{
"start": 405,
"end": 427,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we will introduce our approach in detail. The overall framework of our approach is shown in Figure 2 . We will first briefly introduce the basic model in our approach, then introduce how to incorporate document-and word-level contexts into our model.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 117,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "3"
},
{
"text": "We choose TagLM (Peters et al., 2017) as our basic model. TagLM first captures character-level information of words because named entities usually have specific morphological patterns. For example, China refers to the country in most cases, while china mostly refers to porcelains. Therefore, given a sentence of words w 1 , w 2 , . . . , w n , TagLM learns morphological information with a two-layer BiGRU, as shown in Figure 2 . It takes the character embeddings (whose dimension denoted as d ce ) as input, and the last output hidden state is adopted as character representation c k . Then we concatenate c k with a word embedding w k to construct contextindependent representation x k for each word:",
"cite_spans": [
{
"start": 16,
"end": 37,
"text": "(Peters et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 420,
"end": 428,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline NER model",
"sec_num": "3.1"
},
{
"text": "c k = BiGRU(w k ; \u03b8 c ) \u2208 R d ch w k = E(w k ; \u03b8 w ) \u2208 R dwe x k = [c k ; w k ] \u2208 R dwe+d ch (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline NER model",
"sec_num": "3.1"
},
{
"text": "The word embedding w k is obtained by looking up a pre-trained embedding matrix \u03b8 w , which is fine-tuned during training (Collobert et al., 2011) .",
"cite_spans": [
{
"start": 122,
"end": 146,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline NER model",
"sec_num": "3.1"
},
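Equation (1)'s context-independent word representation, a character-level vector concatenated with a looked-up, fine-tuned word embedding, can be sketched in a few lines of numpy. This is a minimal illustration rather than the paper's implementation: the two-layer character BiGRU is replaced by a mean-pooling stand-in, and all parameter matrices are random placeholders for trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)
d_ch, d_we, vocab = 8, 10, 100

# Stand-in for the trainable word embedding matrix theta_w.
theta_w = rng.normal(size=(vocab, d_we))

def char_encoder(char_ids):
    """Placeholder for the two-layer character BiGRU: mean-pool random
    character embeddings into a d_ch-dimensional vector c_k."""
    char_emb = rng.normal(size=(len(char_ids), d_ch))
    return char_emb.mean(axis=0)

def context_independent_rep(word_id, char_ids):
    """x_k = [c_k; w_k] as in Eq. (1)."""
    c_k = char_encoder(char_ids)     # character representation
    w_k = theta_w[word_id]           # word embedding lookup E(w_k; theta_w)
    return np.concatenate([c_k, w_k])

x_k = context_independent_rep(word_id=3, char_ids=[1, 4, 2])
```

The resulting vector has dimension d_we + d_ch, matching the concatenation in Eq. (1).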
{
"text": "To learn context-sensitive word representations, TagLM applies two layers of BiGRUs on [x 1:n ]. Then the pre-trained LM embeddings are concatenated with the hidden states of the bottom BiGRU. We denote the output of the bottom and the top BiGRU as h word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline NER model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k \u2208 R d sh and h seq k \u2208 R d sqh : h word k = BiGRU(x k ), h seq k = BiGRU([h word k ; LM k ]).",
"eq_num": "(2)"
}
],
"section": "Baseline NER model",
"sec_num": "3.1"
},
{
"text": "Finally, we feed [h seq 1:n ] into a linear-chain CRF to model the correlations between labels in neighbor-hoods and jointly decode the best label sequence. The probabilistic model for linear CRF defines a family of conditional probability p(y|z; \u03b8) over all possible label sequences y given z:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline NER model",
"sec_num": "3.1"
},
{
"text": "p(y|z; \u03b8) = n i=1 \u03c8 i (y i\u22121 , y i , z) y \u2208Y(z) n i=1 \u03c8 i (y i\u22121 , y i , z) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline NER model",
"sec_num": "3.1"
},
{
"text": "where \u03c8 i (y , y, z) = exp(W y ,y z i + b y ,y ) are potential functions, and W y ,y , b y ,y are parameters of the CRF. Following Lafferty et al. 2001and Collobert et al. (2011) , we utilize the sentence CRF loss for training, which is formulated as the negative log-likelihood:",
"cite_spans": [
{
"start": 155,
"end": 178,
"text": "Collobert et al. (2011)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline NER model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L CRF = \u2212 i log p(y|z; \u03b8)",
"eq_num": "(4)"
}
],
"section": "Baseline NER model",
"sec_num": "3.1"
},
{
"text": "We compute the likelihood using the forwardbackward algorithm at the training phase, and use the Viterbi algorithm to find the most likely label sequence at the test phase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline NER model",
"sec_num": "3.1"
},
{
"text": "Sentences within a document are highly correlated, and these correlations provide contextual information at the document level. For example, in the document \"Jason Little is a rugby union player. Little won 75 caps as captain\", the second sentence is ambiguous because it can also mean \"Hardly any person won 75 caps as captain\". In this case, the first sentence in this document explicitly mentions Jason Little as a player. The interaction between the two sentences helps to solve this ambiguity. Therefore, we capture and fuse the document-level context as follows. To capture the document-level context, we first obtain the context-independent sentence representations. Since each word in a sentence has different importance (e.g. a contributes less information than player in \"Jason Little is a rugby union player.\") , we apply the neural attention mechanism (Bahdanau et al., 2014) to filter the uninformative words and learn better sentence representations. Then we contextualize these representations with a sentencelevel BiGRU. Formally,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Context",
"sec_num": "3.2"
},
{
"text": "\u03b1 k = softmax(u w \u2022 tanh(W a h word k + b a )) s i = n k=1 \u03b1 k h word ik h sen i = BiGRU(s i ) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Context",
"sec_num": "3.2"
},
{
"text": "where W a \u2208 R dna\u00d7d wh , b a \u2208 R dna , u w \u2208 R dna are the parameters of the neural attention module.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Context",
"sec_num": "3.2"
},
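The attentive pooling of Eq. (5), which turns word hidden states into a sentence vector, can be sketched as follows (a numpy sketch; W_a, b_a, u_w and the word hidden states are random stand-ins for trained values, and the sentence-level BiGRU contextualization step is omitted):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sentence_representation(H, W_a, b_a, u_w):
    """Attentive pooling: alpha_k = softmax(u_w . tanh(W_a h_k + b_a)),
    s = sum_k alpha_k h_k.  H: (n, d_wh) word hidden states."""
    scores = np.tanh(H @ W_a.T + b_a) @ u_w  # (n,) unnormalized weights
    alpha = softmax(scores)                  # attention over words
    return alpha @ H, alpha                  # sentence vector and weights

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))                  # 5 words, d_wh = 8
s, alpha = sentence_representation(
    H, rng.normal(size=(4, 8)), rng.normal(size=4), rng.normal(size=4))
```

The weights alpha are a proper distribution over the words, so uninformative words can be down-weighted in the pooled vector s.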
{
"text": "Next, we propose to capture the interactions between sentences with the multi-head self attention mechanism (Vaswani et al., 2017) . In most existing attention mechanisms, a sentence's attention weight is only based on its representation, and the relationships between sentences cannot be modeled. Self attention is an effective way to capture the interactions between sentences. Besides, a sentence may interact with multiple sentences. For example, in the document \"LeBron James is a basketball player for the Lakers. In 2016 James won the championship of NBA. In 2018 he signed with the Lakers\", the first sentence interacts with the remaining two sentences simultaneously because they jointly mention James and Lakers respectively. Thus, we propose to apply the multi-head self attention mechanism to learn better representations of sentences by modeling their relationship with multiple sentences. We first project the sentence hidden states into the h-th sub-space, and calculate the attention weights in this sub-space:",
"cite_spans": [
{
"start": 108,
"end": 130,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Context",
"sec_num": "3.2"
},
{
"text": "[Q (h) j ; K (h) j ; V (h) j ] = [W (h) Q ; W (h) K ; W (h) V ]h sen j z (h) ij = Q (h) i K (h) j , \u03b2 (h) ij = exp z (h) ij j exp z (h) ij (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Context",
"sec_num": "3.2"
},
{
"text": "Then we calculate the sub-representation y (h) i for the i-th sentence by weighted summing the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Context",
"sec_num": "3.2"
},
{
"text": "V (h) j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Context",
"sec_num": "3.2"
},
{
"text": "Finally, these sub-representations are concatenated and projected, resulting in the final representation d i for the i-th sentence. We denote the number of heads as H and the sub-space dimension of each head as d sa , then we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Context",
"sec_num": "3.2"
},
{
"text": "y (h) i = j \u03b2 (h) ij V (h) j d i = W O [y (1) i ; . . . ; y (h) i ; . . . ; y (H) i ] (7) where W (h) Q , W (h) K , W (h) V \u2208 R dsa\u00d7d sh , W O \u2208 R d sh \u00d7Hdsa are projection matrices. d i combines",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Context",
"sec_num": "3.2"
},
{
"text": "representations of all sentences within this document, thus is regarded as the document representation for the i-th sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Context",
"sec_num": "3.2"
},
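Equations (6)-(7) amount to standard multi-head self attention over the sentence vectors h^{sen}_j. A compact numpy sketch (without the 1/sqrt(d) scaling, matching the formulas as written; all projection matrices are random stand-ins for trained parameters):

```python
import numpy as np

def softmax_rows(Z):
    e = np.exp(Z - Z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_self_attention(S, Wq, Wk, Wv, Wo):
    """S: (m, d_sh) sentence vectors. Wq/Wk/Wv: (H, d_sa, d_sh) per-head
    projections; Wo: (d_sh, H*d_sa). Returns one document representation
    d_i per sentence, as in Eq. (7)."""
    heads = []
    for q, k, v in zip(Wq, Wk, Wv):
        Q, K, V = S @ q.T, S @ k.T, S @ v.T  # (m, d_sa) each
        beta = softmax_rows(Q @ K.T)         # (m, m) sentence-to-sentence weights
        heads.append(beta @ V)               # per-head sub-representations y_i
    return np.concatenate(heads, axis=1) @ Wo.T  # (m, d_sh)

rng = np.random.default_rng(0)
S = rng.normal(size=(3, 6))                  # 3 sentences, d_sh = 6
D = multi_head_self_attention(S, rng.normal(size=(2, 4, 6)),
                              rng.normal(size=(2, 4, 6)),
                              rng.normal(size=(2, 4, 6)),
                              rng.normal(size=(6, 8)))
```

Each row of D mixes information from every sentence in the document, which is exactly why it can serve as a document-level context vector for the corresponding sentence.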
{
"text": "To fuse the document-level context, we first add a special token <BOS> (denoted as w i0 ) at the be- ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Context",
"sec_num": "3.2"
},
{
"text": "In natural language, words themselves have different preferences on different entity types and relative positions from the entities. These preferences provide word-level contextual information for the NER task. For example, in the sentence \"With only one match before New Year, Real will spend Christmas ahead of others\", the type of the entity Real is uncertain because the context of the sentence is inadequate. However, Real prefers to appear as the first word of organizations (e.g. Real Madrid, Real Betis are football clubs). This preference helps to ensure the entity type of Real. Thus we learn and incorporate the word-level context as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level Context",
"sec_num": "3.3"
},
{
"text": "To learn the word-level context, we encode the preferences with the probability distributions of word labels, because the label of a word indicates its entity type and relative position from the entities (e.g., B-ORG means the first word of an organization). To learn the distributions automatically, we propose an auxiliary word classification task and employ a two-layer neural network as the classifier. The classifier's input consists of the morphological representation c k and the word embedding w k . Besides, we add a position embedding p k to represent the relative position information:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level Context",
"sec_num": "3.3"
},
{
"text": "p k = E(k; \u03b8 p ) \u2208 R dpe x k = [c k ; w k ; p k ] \u2208 R dwe+d ch +dpe (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level Context",
"sec_num": "3.3"
},
{
"text": "where p k is obtained by looking up a randomlyinitialized embedding matrix and tuned during training. Then x k is fed into the two-layer classifier to predict label distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level Context",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m k = tanh(W c 1 x k + b c 1 ) p label k = softmax(W c 2 m k + b c 2 )",
"eq_num": "(9)"
}
],
"section": "Word-level Context",
"sec_num": "3.3"
},
{
"text": "where W_{c1} \u2208 R^{d_lch \u00d7 (d_we+d_ch+d_pe)}, b_{c1} \u2208 R^{d_lch}, W_{c2} \u2208 R^{|C| \u00d7 d_lch}, b_{c2} \u2208 R^{|C|} are the parameters of the classifier. We use p^{label}_k to compute the loss function for word classification, which is formulated as a cross-entropy loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level Context",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L W C (\u03b8) = \u2212 n k=1 log p label k (y k |\u03b8).",
"eq_num": "(10)"
}
],
"section": "Word-level Context",
"sec_num": "3.3"
},
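The auxiliary classifier of Eqs. (8)-(10) is a plain two-layer network trained with cross-entropy. A numpy sketch with random stand-in parameters (d_lch, |C| and the input dimension are illustrative choices, not the paper's settings):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def word_classifier(x, W1, b1, W2, b2):
    """Eq. (9): m_k = tanh(W_c1 x_k + b_c1), p_label = softmax(W_c2 m_k + b_c2)."""
    return softmax(W2 @ np.tanh(W1 @ x + b1) + b2)

def word_classification_loss(xs, ys, W1, b1, W2, b2):
    """Eq. (10): L_WC = -sum_k log p_label_k(y_k)."""
    return -sum(np.log(word_classifier(x, W1, b1, W2, b2)[y])
                for x, y in zip(xs, ys))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 6)), rng.normal(size=5)  # d_lch = 5, input dim 6
W2, b2 = rng.normal(size=(4, 5)), rng.normal(size=4)  # |C| = 4 word classes
p = word_classifier(rng.normal(size=6), W1, b1, W2, b2)
loss = word_classification_loss(rng.normal(size=(3, 6)), [0, 2, 1], W1, b1, W2, b2)
```

The predicted distribution p is exactly what gets concatenated with the CRF input to inject the word-level context.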
{
"text": "To incorporate the word-level context, we concatenate p label ik with the original CRF input h seq ik to enrich word representations with the label distributions (Seyler et al., 2018) . The CRF takes the enhanced word representations as input and decodes the best label sequence. Our framework is jointly trained on the original NER and the auxiliary classification task via multi-task learning:",
"cite_spans": [
{
"start": 162,
"end": 183,
"text": "(Seyler et al., 2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level Context",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(\u03b8) = L CRF (\u03b8) + \u03bbL W C (\u03b8),",
"eq_num": "(11)"
}
],
"section": "Word-level Context",
"sec_num": "3.3"
},
{
"text": "where \u03bb is the weight of word classification loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level Context",
"sec_num": "3.3"
},
{
"text": "We evaluate our approach on the CoNLL-2002 , CoNLL-2003 , and Wikigold NER datasets. The Wikigold dataset contains annotations for English (denoted as WIKI). The CoNLL-2002 dataset contains annotations for Dutch (denoted as NLD) 1 . The CoNLL-2003 dataset contains annotations for English and German (denoted as ENG and DEU respectively). All datasets are manually tagged with four different entity types (LOC, PER, ORG, MISC) . The CoNLL datasets have standard train, development, and test sets. Since the Wikigold dataset doesn't have standard separation, we randomly split the data into the three sets and perform all experiments on the same separation. Table 1 shows the number of documents and sentences of the datasets. We report the official micro-averaged F 1 scores on all the datasets. ",
"cite_spans": [
{
"start": 32,
"end": 42,
"text": "CoNLL-2002",
"ref_id": null
},
{
"start": 43,
"end": 55,
"text": ", CoNLL-2003",
"ref_id": null
},
{
"start": 405,
"end": 426,
"text": "(LOC, PER, ORG, MISC)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 657,
"end": 664,
"text": "Table 1",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments 4.1 Datasets and Evaluation Metrics",
"sec_num": "4"
},
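Since the paper fixes one random split of Wikigold and reuses it for every experiment, a seeded document-level split is the natural implementation. This is a minimal sketch under stated assumptions: the 80/10/10 ratio and the function name `split_documents` are hypothetical (the paper does not report its split proportions).

```python
import random

def split_documents(docs, seed=42, ratios=(0.8, 0.1, 0.1)):
    # Shuffle documents once with a fixed seed so the same
    # train/dev/test split is reused across all experiments.
    rng = random.Random(seed)
    docs = list(docs)
    rng.shuffle(docs)
    n = len(docs)
    n_train = int(ratios[0] * n)
    n_dev = int(ratios[1] * n)
    return (docs[:n_train],
            docs[n_train:n_train + n_dev],
            docs[n_train + n_dev:])
```

Splitting at the document level (rather than the sentence level) keeps all sentences of a document in the same partition, which matters for a model that uses document-level context.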
{
"text": "In our experiments, we use the BIOES labeling scheme for output tags, which was proven to outperform other options in previous work (Ratinov and Roth, 2009 [LOC,PER,ORG,MISC] + O). For English datasets, we use the 50-dimensional Senna word embeddings (Collobert et al., 2011) and pre-process the text by lower-casing the words and replacing all digits with 0 (Chiu and Nichols, 2016; Peters et al., 2017) . For Dutch and German datasets, we use the pre-trained 300-dimensional word2vec embeddings (Mikolov et al., 2013) , which are trained on the Wikipedia dumps 2 . We adopt ELMo (Peters et al., 2018; Che et al., 2018) as the pre-trained LM embeddings 3 . The hyper-parameters of our model are shown in Table 2 . For regularization, we add 25% dropout (Srivastava et al., 2014) to the input of all BiGRUs, but not to the recurrent connections. Following Peters et al. (2017) , we use the Adam optimizer (Kingma and Ba, 2014) with gradient norms clipped at 5.0. We fine-tune the pre-trained word embeddings and ELMo model parameters. We train our model with a constant learning rate of \u03b3 = 0.001 for 20 epochs. Then we start a simple learning rate decay schedule: divide \u03b3 by ten, train for 5 epochs, divide \u03b3 by ten, train for 5 epochs again and stop. We train the model's parameters on the train set and tune the hyper-parameters on the development set. Then we compute F 1 score on the test set at the epoch with the highest development performance. Following previous work (Chiu and Nichols, 2016; Peters et al., 2017) , we train our model for multiple times with different random 2 https://github.com/Kyubyong/wordvectors 3 We also conduct experiments with TagLM+BERTBASE with released parameters. Due to the limitation of GPU memory, we didn't fine-tune BERT. The dev and test set F1 scores are 95.03\u00b10.22 and 91.64\u00b10.18 respectively. 
Our results have a surprisingly huge gap between the reported scores (we refer readers to Section 4.3 and 5.4 of Devlin et al. (2019) ). seeds and report the mean of F 1 .",
"cite_spans": [
{
"start": 132,
"end": 155,
"text": "(Ratinov and Roth, 2009",
"ref_id": "BIBREF29"
},
{
"start": 251,
"end": 275,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF7"
},
{
"start": 359,
"end": 383,
"text": "(Chiu and Nichols, 2016;",
"ref_id": "BIBREF4"
},
{
"start": 384,
"end": 404,
"text": "Peters et al., 2017)",
"ref_id": "BIBREF26"
},
{
"start": 497,
"end": 519,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF23"
},
{
"start": 581,
"end": 602,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF27"
},
{
"start": 603,
"end": 620,
"text": "Che et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 754,
"end": 779,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF31"
},
{
"start": 856,
"end": 876,
"text": "Peters et al. (2017)",
"ref_id": "BIBREF26"
},
{
"start": 1478,
"end": 1502,
"text": "(Chiu and Nichols, 2016;",
"ref_id": "BIBREF4"
},
{
"start": 1503,
"end": 1523,
"text": "Peters et al., 2017)",
"ref_id": "BIBREF26"
},
{
"start": 1955,
"end": 1975,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 705,
"end": 712,
"text": "Table 2",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.2"
},
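The training schedule described above (constant \u03b3 = 0.001 for 20 epochs, then two divide-by-ten steps of 5 epochs each) can be written as a small step function. A minimal sketch; the function name `learning_rate` is hypothetical.

```python
def learning_rate(epoch, gamma=1e-3):
    # Epochs 0-19: constant gamma; 20-24: gamma/10;
    # 25-29: gamma/100; training stops after 30 epochs.
    if epoch < 20:
        return gamma
    if epoch < 25:
        return gamma / 10
    if epoch < 30:
        return gamma / 100
    raise ValueError("training stops after 30 epochs")
```

The whole run therefore spans 30 epochs, with the step decays late enough that most optimization happens at the base rate.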
{
"text": "To demonstrate the effectiveness of our method, we compare our experimental results on the CoNLL-2002 and CoNLL-2003 datasets with previously published state-of-the-art models: Ando and Zhang (2005) proposed a structural learning algorithm for semi-supervised NER; Qi et al. (2009) proposed Word-Class Distribution Learning (WCDL) method; Nothman et al. (2013) introduced Wikipedia articles as extra knowledge; Gillick et al. (2015) proposed a byte-level model for multilingual NER; Lample et al. (2016) proposed BiLSTM-CRF model; Yang et al. (2017) applied transfer learning mechanism for NER; Peters et al. (2018) proposed ELMo embeddings; Clark et al. 2018proposed Cross-View Training (CVT) method; Devlin et al. (2019) proposed BERT representations; Liu et al. (2019) introduced external gazetters to this task; Akbik et al. (2018) proposed contextual character language model and achieved the stateof-the-art performance; Zhang et al. 2018and Hu et al. (2020) utilized global attention to mine document-level information; Gui et al. (2020) used memory mechanism to capture document-level label consistency. Table 3 shows the comparison results, from which we can observe that the incorporation of multi-level contexts brings 0.47%, 1.04%, and 0.88% absolute F 1 score improvement on the English, German and Dutch dataset respectively compared with our baseline model. In addition, our model outperforms most of the previous sentence-and document-level methods on the three languages. The improvements demonstrate the effectiveness of our framework, which fully exploits the document and word-level contexts and combines the multi-level contexts. With the assistance of multi-level contexts, our model can capture more contextual information beyond sentences and recognize entities more correctly.",
"cite_spans": [
{
"start": 91,
"end": 105,
"text": "CoNLL-2002 and",
"ref_id": null
},
{
"start": 106,
"end": 116,
"text": "CoNLL-2003",
"ref_id": null
},
{
"start": 186,
"end": 198,
"text": "Zhang (2005)",
"ref_id": "BIBREF1"
},
{
"start": 265,
"end": 281,
"text": "Qi et al. (2009)",
"ref_id": "BIBREF28"
},
{
"start": 339,
"end": 360,
"text": "Nothman et al. (2013)",
"ref_id": "BIBREF24"
},
{
"start": 411,
"end": 432,
"text": "Gillick et al. (2015)",
"ref_id": "BIBREF11"
},
{
"start": 483,
"end": 503,
"text": "Lample et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 595,
"end": 615,
"text": "Peters et al. (2018)",
"ref_id": "BIBREF27"
},
{
"start": 702,
"end": 722,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF8"
},
{
"start": 754,
"end": 771,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 816,
"end": 835,
"text": "Akbik et al. (2018)",
"ref_id": "BIBREF0"
},
{
"start": 948,
"end": 964,
"text": "Hu et al. (2020)",
"ref_id": "BIBREF13"
},
{
"start": 1027,
"end": 1044,
"text": "Gui et al. (2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 1112,
"end": 1119,
"text": "Table 3",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Performance Evaluation",
"sec_num": "4.3"
},
{
"text": "To study the contribution of the document-and word-level context respectively, we conduct experiments on two settings: only incorporating the word-level context and the document-level context, and compare the F 1 score with our model. Figure 3 shows the results, from which we have the following observations: (1) The document-and word-level contexts both bring improvements on the four datasets. It indicates the utility of these contexts respectively. The document-level context contains interactions between sentences within a document. The word-level context contains words' type and position preferences. Either of the contexts can help alleviate the effects of limited or ambiguous sentence context. (2) The multi-level contexts method improves the F 1 score over the other two settings on all the datasets. It validates the effectiveness of the fusion of multi-level contexts. Our framework can exploit and fuse the contexts at the document and word level simultaneously. With the assistance of more extra contextual information from the document and word level, our method performs better than the other two settings of combining only one context. Table 4 shows the comparison result on the CoNLL-2003 English test set. The first two options essentially translate h word ik in the vector space, because they enhance h word ik with the same d i for all words. Therefore they cannot fully combine the contexts. To distinguish between the latter two options, we need to focus on the internal calculation of GRU:",
"cite_spans": [],
"ref_spans": [
{
"start": 235,
"end": 243,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 1156,
"end": 1163,
"text": "Table 4",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.4"
},
{
"text": "h t = (1 \u2212 z t )n t + z t h t\u22121 , n t = tanh(W in x t + b in + r t (W hn h t\u22121 + b hn ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.4"
},
{
"text": ". GRU uses non-linearly transformed x t and raw h t to calculate hidden states. We speculate that the nonlinear transformation on d i aligns it to the same space as h word ik and produces better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.4"
},
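The asymmetry discussed above is visible in a single GRU step: the input is pushed through a tanh, while the previous hidden state enters the final interpolation untransformed. A minimal NumPy sketch of the standard GRU cell; the weight/bias dictionary keys are hypothetical names for the matrices W_{ir}, W_{hr}, etc.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_cell(x_t, h_prev, W, b):
    # W, b: dicts of weight matrices and bias vectors (hypothetical keys)
    r_t = sigmoid(W["ir"] @ x_t + b["ir"] + W["hr"] @ h_prev + b["hr"])
    z_t = sigmoid(W["iz"] @ x_t + b["iz"] + W["hz"] @ h_prev + b["hz"])
    # candidate state: x_t is non-linearly transformed here ...
    n_t = np.tanh(W["in"] @ x_t + b["in"] + r_t * (W["hn"] @ h_prev + b["hn"]))
    # ... while h_prev enters the interpolation raw
    return (1 - z_t) * n_t + z_t * h_prev
```

Feeding d_i as the input x_t therefore subjects it to the tanh transformation, whereas feeding it as the initial hidden state leaves it untransformed, which matches the speculation above.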
{
"text": "In this experiment, we compare three ways of fusing word-level contextual representations p label i with the sentence-level context:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How to fuse the word-level context?",
"sec_num": "4.5.2"
},
{
"text": "\u2022 Table 5 shows the comparison results. The first two options use BiGRU to encode the label distributions but perform worse than the last one using CRF. We speculate that CRF is more suitable to encode the distributions of word label than BiGRU because there exist strong connections between two adjacent words' label distributions intuitively.",
"cite_spans": [],
"ref_spans": [
{
"start": 2,
"end": 9,
"text": "Table 5",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "How to fuse the word-level context?",
"sec_num": "4.5.2"
},
{
"text": "In this part, we compare three choices of attention mechanism: the multi-head self attention, self attention, and the most-popular neural attention mechanism. Table 6 shows the comparison results. We can observe that the self attention mechanism outperforms neural attention because it can capture interactions between sentences in the document. In contrast, the neural attention mechanism only learns the sentence's weight based on its representation, thus fails to capture the interactions. Furthermore, multi-head self attention performs better than self attention because it can capture a sentence's interactions with multiple sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 159,
"end": 166,
"text": "Table 6",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Which attention mechanism to use at document level?",
"sec_num": "4.5.3"
},
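The document-level mechanism compared above lets every sentence attend to every other sentence in its document, with multiple heads attending to different sentences. This is a minimal NumPy sketch of multi-head self attention over sentence representations, not the authors' exact implementation; the projection matrices `Wq`, `Wk`, `Wv` and head count are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_self_attention(S, Wq, Wk, Wv, n_heads=4):
    # S: (num_sentences, d) sentence representations of one document
    m, d = S.shape
    dh = d // n_heads
    Q, K, V = S @ Wq, S @ Wk, S @ Wv
    heads = []
    for h in range(n_heads):
        q = Q[:, h*dh:(h+1)*dh]
        k = K[:, h*dh:(h+1)*dh]
        v = V[:, h*dh:(h+1)*dh]
        # each sentence attends to every sentence in the document
        attn = softmax(q @ k.T / np.sqrt(dh))
        heads.append(attn @ v)
    return np.concatenate(heads, axis=-1)  # (num_sentences, d)
```

A plain neural attention baseline would instead score each sentence from its own representation alone, producing a single weight per sentence and no pairwise interactions, which is exactly the limitation noted above.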
{
"text": "We conduct experiments on different weights \u03bb to investigate its influence and illustrate the result in Figure 4 . We speculate that \u03bb controls the propor- tion of the word-level context in all contexts. When \u03bb changes, the balance of the contexts is broken, and the performance is affected. Besides, \u03bb controls the learning rate of the word label classifier's parameters. Its increase and decrease will hurt the accuracy of the label classification. Table 7 shows the comparison of the baseline and our model on two example sentences. In the first case, the ambiguity of LITTLE disturbs the baseline model. Our model finds another explicit mention Jason Little as a person (centre) in this document and correctly identifies this entity. In the second case, the Melbourne Cricket Ground (location) is wrongly classified as organization, because one can either play at a team or play at a stadium. Our model notices the two other mentions of Ground, both of which appears as the last word of location, and corrects the erroneous entity type. The examples prove that our model can mine contextual information outside sentences and recognize entities more correctly than the baseline model.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 112,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 451,
"end": 458,
"text": "Table 7",
"ref_id": "TABREF14"
}
],
"eq_spans": [],
"section": "How to choose the weight \u03bb of the auxiliary task ?",
"sec_num": "4.5.4"
},
{
"text": "In this paper, we propose a unified structure to incorporate multi-level contexts for the NER task. We use TagLM as our baseline model to capture the sentence-level context. To incorporate the document-level context, we propose to learn relationships between sentences within a document with the multi-head self attention mechanism. Besides, to mine word-level contextual information, we propose an auxiliary task to predict the word type to capture its type preferences. Our model is jointly trained on the NER and auxiliary tasks through multi-task learning. We evaluate our model on several benchmark datasets, and the experimental results prove the effectiveness of our method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The CoNLL-2002 dataset contains Dutch and Spanish data. But the Spanish data lacks the marks of doucument boundaries. Thus we only conduct experiments on the Dutch data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the National Key Research and Development Program of China under Grant number 2018YFB2101501, the National Natural Science Foundation of China under Grant numbers U1936208, U1936216.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Contextual string embeddings for sequence labeling",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1638--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A framework for learning predictive structures from multiple tasks and unlabeled data",
"authors": [
{
"first": "Rie",
"middle": [],
"last": "Kubota",
"suffix": ""
},
{
"first": "Ando",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2005,
"venue": "JMLR",
"volume": "6",
"issue": "",
"pages": "1817--1853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. JMLR, 6(Nov):1817-1853.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Towards better ud parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation",
"authors": [
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Yijia",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yuxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "55--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, and Ting Liu. 2018. Towards better ud parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation. In CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies, pages 55-64.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Named entity recognition with bidirectional lstm-cnns",
"authors": [
{
"first": "Jason",
"middle": [
"P C"
],
"last": "Chiu",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nichols",
"suffix": ""
}
],
"year": 2016,
"venue": "TACL",
"volume": "4",
"issue": "",
"pages": "357--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason PC Chiu and Eric Nichols. 2016. Named en- tity recognition with bidirectional lstm-cnns. TACL, 4:357-370.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS 2014 Workshop on Deep Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence mod- eling. In NIPS 2014 Workshop on Deep Learning, December 2014.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semi-supervised sequence modeling with cross-view training",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1914--1925",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Christopher D Man- ning, and Quoc Le. 2018. Semi-supervised se- quence modeling with cross-view training. In EMNLP, pages 1914-1925.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "JMLR",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR, 12(Aug):2493-2537.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. pages 4171-4186.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Boosting unsupervised relation extraction by using ner",
"authors": [
{
"first": "Ronen",
"middle": [],
"last": "Feldman",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 2006,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "473--481",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronen Feldman and Benjamin Rosenfeld. 2006. Boost- ing unsupervised relation extraction by using ner. In EMNLP, pages 473-481.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning to forget: Continual prediction with lstm",
"authors": [
{
"first": "Felix",
"middle": [
"A"
],
"last": "Gers",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Cummins",
"suffix": ""
}
],
"year": 2000,
"venue": "Neural Computation",
"volume": "12",
"issue": "10",
"pages": "2451--2471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix A Gers, J\u00fcrgen Schmidhuber, and Fred Cummins. 2000. Learning to forget: Continual prediction with lstm. Neural Computation, 12(10):2451-2471.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Oriol Vinyals, and Amarnag Subramanya",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Brunk",
"suffix": ""
}
],
"year": 2015,
"venue": "Multilingual language processing from bytes",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1512.00103"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. Multilingual language process- ing from bytes. arXiv preprint arXiv:1512.00103.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Leveraging document-level label consistency for named entity recognition",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "Jiacheng",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yaqian",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yeyun",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "3976--3982",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Gui, Jiacheng Ye, Qi Zhang, Yaqian Zhou, Yeyun Gong, and Xuanjing Huang. 2020. Leveraging document-level label consistency for named entity recognition. In IJCAI, pages 3976-3982.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Leveraging multi-token entities in document-level named entity recognition",
"authors": [
{
"first": "Anwen",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zhicheng",
"middle": [],
"last": "Dou",
"suffix": ""
},
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Ji-Rong",
"middle": [],
"last": "Wen",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "7961--7968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anwen Hu, Zhicheng Dou, Jian-Yun Nie, and Ji- Rong Wen. 2020. Leveraging multi-token entities in document-level named entity recognition. In AAAI, pages 7961-7968.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John D Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. In ICML, pages 282-289.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "260--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL-HLT, pages 260-270.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Finegrained named entity recognition using conditional random fields for question answering",
"authors": [
{
"first": "Changki",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Yi-Gyu",
"middle": [],
"last": "Hwang",
"suffix": ""
},
{
"first": "Hyo-Jung",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "Soojong",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Jeong",
"middle": [],
"last": "Heo",
"suffix": ""
},
{
"first": "Chung-Hee",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Hyeon-Jin",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Ji-Hyun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Myung-Gil",
"middle": [],
"last": "Jang",
"suffix": ""
}
],
"year": 2006,
"venue": "Asia Information Retrieval Symposium",
"volume": "",
"issue": "",
"pages": "581--587",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Changki Lee, Yi-Gyu Hwang, Hyo-Jung Oh, Soojong Lim, Jeong Heo, Chung-Hee Lee, Hyeon-Jin Kim, Ji-Hyun Wang, and Myung-Gil Jang. 2006. Fine- grained named entity recognition using conditional random fields for question answering. In Asia Information Retrieval Symposium, pages 581-587. Springer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Phrase clustering for discriminative learning",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xiaoyun",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL-AFNLP",
"volume": "",
"issue": "",
"pages": "1030--1038",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin and Xiaoyun Wu. 2009. Phrase clustering for discriminative learning. In ACL-AFNLP, pages 1030-1038. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Towards improving neural named entity recognition with gazetteers",
"authors": [
{
"first": "Tianyu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jin-Ge",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "5301--5307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyu Liu, Jin-Ge Yao, and Chin-Yew Lin. 2019. To- wards improving neural named entity recognition with gazetteers. In ACL, pages 5301-5307.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Joint entity recognition and disambiguation",
"authors": [
{
"first": "Gang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Xiaojiang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zaiqing",
"middle": [],
"last": "Nie",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "879--888",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Za- iqing Nie. 2015. Joint entity recognition and disam- biguation. In EMNLP, pages 879-888.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "1",
"issue": "",
"pages": "1064--1074",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end se- quence labeling via bi-directional lstm-cnns-crf. In ACL, volume 1, pages 1064-1074.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Named entity recognition without gazetteers",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Mikheev",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Moens",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Grover",
"suffix": ""
}
],
"year": 1999,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei Mikheev, Marc Moens, and Claire Grover. 1999. Named entity recognition without gazetteers. In EACL, pages 1-8.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning multilingual named entity recognition from wikipedia",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Nothman",
"suffix": ""
},
{
"first": "Nicky",
"middle": [],
"last": "Ringland",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Tara",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "James R",
"middle": [],
"last": "Curran",
"suffix": ""
}
],
"year": 2013,
"venue": "Artificial Intelligence",
"volume": "194",
"issue": "",
"pages": "151--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R Curran. 2013. Learning mul- tilingual named entity recognition from wikipedia. Artificial Intelligence, 194:151-175.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Lexicon infused phrase embeddings for named entity resolution",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "Vineet",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2014,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "78--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre Passos, Vineet Kumar, and Andrew McCal- lum. 2014. Lexicon infused phrase embeddings for named entity resolution. In CoNLL, pages 78-86.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Semi-supervised sequence tagging with bidirectional language models",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Russell",
"middle": [],
"last": "Power",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "1",
"issue": "",
"pages": "1756--1765",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Waleed Ammar, Chandra Bhagavat- ula, and Russell Power. 2017. Semi-supervised se- quence tagging with bidirectional language models. In ACL, volume 1, pages 1756-1765.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL-HLT",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In NAACL-HLT, volume 1, pages 2227- 2237.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Combining labeled and unlabeled data with word-class distribution learning",
"authors": [
{
"first": "Yanjun",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2009,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "1737--1740",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanjun Qi, Ronan Collobert, Pavel Kuksa, Koray Kavukcuoglu, and Jason Weston. 2009. Combining labeled and unlabeled data with word-class distribu- tion learning. In CIKM, pages 1737-1740. ACM.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Design challenges and misconceptions in named entity recognition",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2009,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "147--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In CoNLL, pages 147-155.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A study of the importance of external knowledge in the named entity recognition task",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Seyler",
"suffix": ""
},
{
"first": "Tatiana",
"middle": [],
"last": "Dembelova",
"suffix": ""
},
{
"first": "Luciano",
"middle": [
"Del"
],
"last": "Corro",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Hoffart",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "2",
"issue": "",
"pages": "241--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominic Seyler, Tatiana Dembelova, Luciano Del Corro, Johannes Hoffart, and Gerhard Weikum. 2018. A study of the importance of external knowledge in the named entity recognition task. In ACL, volume 2, pages 241-246.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Dropout: a simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "JMLR",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929-1958.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Fast and accurate entity recognition with iterated dilated convolutions",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Verga",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Belanger",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2017,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "2670--2680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. In EMNLP, pages 2670-2680.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998-6008.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Improving clinical named entity recognition with global neural attention",
"authors": [
{
"first": "Guohai",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Chengyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiaofeng",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2018,
"venue": "Asia-Pacific Web (APWeb) and Web-Age Information Management (WAIM) Joint International Conference on Web and Big Data",
"volume": "",
"issue": "",
"pages": "264--279",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guohai Xu, Chengyu Wang, and Xiaofeng He. 2018. Improving clinical named entity recognition with global neural attention. In Asia-Pacific Web (APWeb) and Web-Age Information Management (WAIM) Joint International Conference on Web and Big Data, pages 264-279. Springer.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Transfer learning for sequence tagging with hierarchical recurrent networks",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Co",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.06345"
]
},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Ruslan Salakhutdinov, and William W Co- hen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. arXiv preprint arXiv:1703.06345.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Global attention for name tagging",
"authors": [
{
"first": "Boliang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Spencer",
"middle": [],
"last": "Whitehead",
"suffix": ""
},
{
"first": "Lifu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2018,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "86--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boliang Zhang, Spencer Whitehead, Lifu Huang, and Heng Ji. 2018. Global attention for name tagging. In CoNLL, pages 86-96.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "ginning of the sentence w i1 , . . . , w in , and feed the sentence into TagLM's bottom BiGRU to compute [h word i0 , h word i1 , . . . , h word in ]. Next we compute the document representation d i and replace h word i0 with it (requires d wh = d sh ). Then we feed them into the top BiGRU. The input of the top BiGRU contains document-and sentence-level contextual representations simultaneously. Thus its output hidden states act as the fusion of the two contexts.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "An ablation study of our framework. We compare the mean of test set F 1 score under the four settings on the four datasets. The bars indicate the standard deviation of F 1 score.",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "Analysis 4.5.1 How to fuse the document-level context?In this experiment, we propose four alternative ways to fuse document-level contextual representation d i with sentence-level contextual representa-",
"type_str": "figure",
"num": null
},
"FIGREF3": {
"uris": null,
"text": "The CoNLL 2003 English test set performance of our model with different \u03bb.",
"type_str": "figure",
"num": null
},
"TABREF2": {
"content": "<table/>",
"num": null,
"text": "|C| are the parameters of the classifier (the number of all labels denoted as |C|). During training, we use p label k",
"type_str": "table",
"html": null
},
"TABREF4": {
"content": "<table/>",
"num": null,
"text": "Numbers of documents (and sentences) in datasets statistics.",
"type_str": "table",
"html": null
},
"TABREF5": {
"content": "<table><tr><td>Hyper-parameter</td><td>Value</td></tr><tr><td>Word embedding dim. (d we )</td><td>50/300</td></tr><tr><td>Sequence hidden state dim. (d sqh )</td><td>300</td></tr><tr><td>Neural attention subspace dim. (d na )</td><td>100</td></tr><tr><td>Self attention subspace dim. (d sa )</td><td>60</td></tr><tr><td>Label classifier hidden dim. (d lch )</td><td>64</td></tr><tr><td>Number of heads (H)</td><td>5</td></tr><tr><td>Weight of L W C (\u03bb)</td><td>0.1</td></tr><tr><td/><td>). Under this tagging</td></tr><tr><td/><td>scheme, the number of labels |C| = 17 ([B,I,E,S]\u00d7</td></tr></table>",
"num": null,
"text": "Character embedding dim. (d ce ) 25 Position embedding dim. (d pe ) 30 Character hidden state dim. (d ch ) 80 Word hidden state dim. (d wh ) 300 Sentence hidden state dim. (d sh ) 300",
"type_str": "table",
"html": null
},
"TABREF6": {
"content": "<table/>",
"num": null,
"text": "Hyper-parameters of our model.",
"type_str": "table",
"html": null
},
"TABREF8": {
"content": "<table/>",
"num": null,
"text": "",
"type_str": "table",
"html": null
},
"TABREF11": {
"content": "<table><tr><td>Fusion method</td><td/><td>F 1 \u00b1 std</td></tr><tr><td>Concatenate p label ik Concatenate p label ik Concatenate p label ik</td><td>with x k with h word ik with h seq ik</td><td>91.99\u00b10.14 92.33\u00b10.11 92.68\u00b10.09</td></tr></table>",
"num": null,
"text": "Comparison of different ways of fusing the document-level context on CoNLL 2003 test set.",
"type_str": "table",
"html": null
},
"TABREF12": {
"content": "<table><tr><td>Attention mechanism</td><td>F 1 \u00b1 std</td></tr><tr><td>Neural attention</td><td>92.49\u00b10.10</td></tr><tr><td>Self attention</td><td>92.52\u00b10.09</td></tr><tr><td colspan=\"2\">Multi-head self attention 92.68\u00b10.09</td></tr></table>",
"num": null,
"text": "Comparison of different ways of fusing the word-level context on CoNLL 2003 test set.",
"type_str": "table",
"html": null
},
"TABREF13": {
"content": "<table><tr><td/><td>Label</td><td>LITTLE TO MISS CAMPESE FAREWELL</td></tr><tr><td>Case</td><td colspan=\"2\">TagLM LITTLE TO MISS CAMPESE FAREWELL</td></tr><tr><td>#1</td><td>Ours</td><td>LITTLE TO MISS CAMPESE FAREWELL</td></tr><tr><td/><td>D-lvl</td><td>Centre Jason Little will miss ...</td></tr><tr><td>Case</td><td/><td/></tr><tr><td>#2</td><td/><td/></tr></table>",
"num": null,
"text": "Comparison of different attention mechanisms at document level on CoNLL 2003 test set. Label ... play at the Melbourne Cricket Ground. TagLM ... play at the Melbourne Cricket Ground. Ours ... play at the Melbourne Cricket Ground. W-lvl 1. ... the Sydney Cricket Ground ... 2. ... the Melbourne Cricket Ground ...",
"type_str": "table",
"html": null
},
"TABREF14": {
"content": "<table/>",
"num": null,
"text": "Comparison between the baseline and our method on two cases. Blue, red and orange entities indicate the names of organizations, persons and locations. The bold words are word-level (W-lvl) or documentlevel (D-lvl) supporting contextual evidence.",
"type_str": "table",
"html": null
}
}
}
}