{
"paper_id": "K18-1017",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:09:35.848291Z"
},
"title": "Learning text representations for 500K classification tasks on Named Entity Disambiguation",
"authors": [
{
"first": "Ander",
"middle": [],
"last": "Barrena",
"suffix": "",
"affiliation": {
"laboratory": "IXA NLP Group UPV",
"institution": "EHU University of the Basque Country Donostia",
"location": {
"settlement": "Basque Country"
}
},
"email": "ander.barrena@ehu.eus"
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": "",
"affiliation": {
"laboratory": "IXA NLP Group UPV",
"institution": "EHU University of the Basque Country Donostia",
"location": {
"settlement": "Basque Country"
}
},
"email": "a.soroa@ehu.eus"
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": "",
"affiliation": {
"laboratory": "IXA NLP Group UPV",
"institution": "EHU University of the Basque Country Donostia",
"location": {
"settlement": "Basque Country"
}
},
"email": "e.agirre@ehu.eus"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Named Entity Disambiguation algorithms typically learn a single model for all target entities. In this paper we present a word expert model and train separate deep learning models for each target entity string, yielding 500K classification tasks. This gives us the opportunity to benchmark popular text representation alternatives on this massive dataset. In order to face scarce training data we propose a simple data-augmentation technique and transfer-learning. We show that bag-of-word-embeddings are better than LSTMs for tasks with scarce training data, while the situation is reversed when having larger amounts. Transferring an LSTM which is learned on all datasets is the most effective context representation option for the word experts in all frequency bands. The experiments show that our system trained on out-of-domain Wikipedia data surpasses comparable NED systems which have been trained on in-domain training data.",
"pdf_parse": {
"paper_id": "K18-1017",
"_pdf_hash": "",
"abstract": [
{
"text": "Named Entity Disambiguation algorithms typically learn a single model for all target entities. In this paper we present a word expert model and train separate deep learning models for each target entity string, yielding 500K classification tasks. This gives us the opportunity to benchmark popular text representation alternatives on this massive dataset. In order to face scarce training data we propose a simple data-augmentation technique and transfer-learning. We show that bag-of-word-embeddings are better than LSTMs for tasks with scarce training data, while the situation is reversed when having larger amounts. Transferring an LSTM which is learned on all datasets is the most effective context representation option for the word experts in all frequency bands. The experiments show that our system trained on out-of-domain Wikipedia data surpasses comparable NED systems which have been trained on in-domain training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named Entity Disambiguation (NED), also known as Entity Linking or Entity Resolution, is a task where entity mentions in running text need to be linked to their entity entries in a Knowledge Base (KB), such as Wikidata, Wikipedia or other derived resources like DBpedia (Bunescu and Pasca, 2006; McNamee and Dang, 2009; Hoffart et al., 2011) . This task is challenging, as some entity mentions like \"London\" can refer to a number of places, people, fictional characters, brands, movies, books or songs.",
"cite_spans": [
{
"start": 266,
"end": 291,
"text": "(Bunescu and Pasca, 2006;",
"ref_id": "BIBREF4"
},
{
"start": 292,
"end": 315,
"text": "McNamee and Dang, 2009;",
"ref_id": "BIBREF21"
},
{
"start": 316,
"end": 337,
"text": "Hoffart et al., 2011)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a mention in context, NED methods (Cucerzan, 2007; Han and Sun, 2011; Ratinov et al., 2011; Lazic et al., 2015) typically rely on three models: (1) a mention model which collects possible entities which can be referred to by the mention string (aliases or surface forms), possibly weighted according to prior probabilities; (2) a context model which measures to which extent the entities fit well in the context of the mention, using textual features; (3) a coherence model which prefers entities that are related to the other entities in the document. The first and second models are local in that they only require a short context of occurrence and disambiguate each mention in the document separately. The third model is global, in that all mentions are disambiguated simultaneously (Ratinov et al., 2011) . Recent work has shown that local models can be improved by adding a global coherence model (Ratinov et al., 2011; Globerson et al., 2016) . In this work we focus on a local model; a global model could improve the results further.",
"cite_spans": [
{
"start": 40,
"end": 56,
"text": "(Cucerzan, 2007;",
"ref_id": "BIBREF9"
},
{
"start": 57,
"end": 75,
"text": "Han and Sun, 2011;",
"ref_id": "BIBREF14"
},
{
"start": 76,
"end": 97,
"text": "Ratinov et al., 2011;",
"ref_id": "BIBREF25"
},
{
"start": 98,
"end": 117,
"text": "Lazic et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 792,
"end": 814,
"text": "(Ratinov et al., 2011)",
"ref_id": "BIBREF25"
},
{
"start": 905,
"end": 927,
"text": "(Ratinov et al., 2011;",
"ref_id": "BIBREF25"
},
{
"start": 928,
"end": 951,
"text": "Globerson et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "All local and global systems mentioned above, as well as the current state-of-the-art systems (Lazic et al., 2015; Globerson et al., 2016; Yamada et al., 2016; Ganea and Hofmann, 2017) , rely on single models for each of the above, that is, they have a single mention model, context model and coherence model for all entities, e.g. the 500K ambiguous entity mentions occurring more than 10 times in Wikipedia. While this has the advantage of reusing the parameters across mentions, it also makes the problem unnecessarily complex.",
"cite_spans": [
{
"start": 94,
"end": 114,
"text": "(Lazic et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 115,
"end": 138,
"text": "Globerson et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 139,
"end": 159,
"text": "Yamada et al., 2016;",
"ref_id": "BIBREF30"
},
{
"start": 160,
"end": 184,
"text": "Ganea and Hofmann, 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we propose to break the task of NED into 500K classification tasks, one for each target mention, as opposed to building a single model for all 500K mentions. The advantage of this approach is that each of the 500K classification tasks is simpler, as the classifier needs to focus on learning a good context model for a single mention and a limited set of entities (those returned by the mention model). On the negative side, training instances for mentions follow a long tail distribution, with some mentions having a huge number of examples, but with the vast majority of mentions having very limited training data, e.g. 10 occurrences linking to an entity in Wikipedia. Our results will show that data-augmentation and transfer learning allow us to overcome the sparseness problem, yielding the best results among local systems, very close to the best local/global combined systems. Contrary to systems trained on in-domain data (Cucerzan, 2012; Chisholm and Hachey, 2015; Globerson et al., 2016; Yamada et al., 2016; Sil et al., 2018) , ours is trained on Wikipedia and tested out-of-domain.",
"cite_spans": [
{
"start": 945,
"end": 961,
"text": "(Cucerzan, 2012;",
"ref_id": "BIBREF10"
},
{
"start": 962,
"end": 988,
"text": "Chisholm and Hachey, 2015;",
"ref_id": "BIBREF7"
},
{
"start": 989,
"end": 1012,
"text": "Globerson et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 1013,
"end": 1033,
"text": "Yamada et al., 2016;",
"ref_id": "BIBREF30"
},
{
"start": 1034,
"end": 1051,
"text": "Sil et al., 2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "From another perspective, a set of 500K classification problems provides a great experimental framework for testing text representation and classification algorithms. More specifically, deep learning methods provide end-to-end algorithms to learn both representations and classifiers jointly. In fact, learning text representation models has become a central topic in natural language understanding, as it allows transferring representation models across tasks (Conneau and Kiela, 2018; Peters et al., 2018; Wang et al., 2018) . In this paper, we explore several popular text representation options, as well as data-augmentation (Zhang and LeCun, 2015) and transfer learning (Bengio, 2012) . All training examples and models in this paper, as well as the pytorch code to reproduce results, are available 1 . This paper is structured as follows. We first present our models. Section 3 presents the experiments, followed by related work and conclusions.",
"cite_spans": [
{
"start": 461,
"end": 486,
"text": "(Conneau and Kiela, 2018;",
"ref_id": "BIBREF8"
},
{
"start": 487,
"end": 507,
"text": "Peters et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 508,
"end": 526,
"text": "Wang et al., 2018)",
"ref_id": "BIBREF29"
},
{
"start": 628,
"end": 651,
"text": "(Zhang and LeCun, 2015)",
"ref_id": "BIBREF32"
},
{
"start": 674,
"end": 688,
"text": "(Bengio, 2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section we describe the deep learning models proposed in this work. We first present our use of Wikipedia to produce the candidate model and the training instances, followed by the deep learning models. We will mention options and hyperparameters as we explain each component. Unless explicitly stated, we used default values; the rest were selected and tuned solely on development data from Wikipedia itself, with no access to other datasets (cf. Section 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Learning models for NED",
"sec_num": "2"
},
{
"text": "We used the English Wikipedia 2 as the only resource for training the models. On the one hand, Wikipedia articles define the target set of entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing and Resources",
"sec_num": "2.1"
},
{
"text": "On the other hand, Wikipedia editors have manually added hyperlinks to articles, where the anchor text corresponds to the mention, and the url corresponds to the entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing and Resources",
"sec_num": "2.1"
},
{
"text": "We first built a candidate model as a dictionary that links each text anchor to possible entities, using the method presented in (Spitkovsky and Chang, 2012; Barrena et al., 2016) . Let M be the set of all unique mention strings m, E the set of all target entities e, and E m = {e 1 , . . . , e m } the set of entities that can be referred to by mention m. We kept the 30 most frequent candidates for each mention for the sake of efficiency. We report the sizes of E and M below.",
"cite_spans": [
{
"start": 145,
"end": 157,
"text": "Chang, 2012;",
"ref_id": "BIBREF27"
},
{
"start": 158,
"end": 179,
"text": "Barrena et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing and Resources",
"sec_num": "2.1"
},
{
"text": "We then extracted annotated examples by scanning through the page contents for hyperlinks that link anchors (the mentions) to the corresponding Wikipedia pages (the entities). For each such hyperlink, we build a context c by first tokenizing and removing the stop words, and then extracting a window of 20 words to the left and 20 words to the right from the anchor. We thus construct a set of N m labeled instances {c i , y i }, where y i \u2208 E m , for each m. We did not apply any kind of lemmatization or stemming to the training contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing and Resources",
"sec_num": "2.1"
},
{
"text": "In the word expert approach we train one classifier for each possible ambiguous mention. We are thus interested in learning a classifier that assigns a target mention m \u2208 M appearing in a context c to one of its possible entity candidates E m = {e 1 , . . . , e m } based on the set of N m training instances {c i , y i }, where y i \u2208 E m . From the approximately 1M ambiguous mentions in Wikipedia only 523K occur more than 10 times as anchors in Wikipedia, and we thus limit M to 523K mentions and learn 523K classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Expert models",
"sec_num": "2.2"
},
{
"text": "Given the textual context of a mention, c i , the text representation model will output a vector representation h. We tried different alternatives for representing context, as described below. Given the vector h, we define the classifier as a neural network consisting of a number of fully connected layers, followed by a softmax layer with as many output dimensions as the number of candidates of the target mention E m . The whole network (representation model and classifier) is trained end-to-end using cross-entropy loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Expert models",
"sec_num": "2.2"
},
{
"text": "In order to tune the hyper-parameters, we split the examples into training (90%) and development (10%). We tried different configurations of the classifier, such as the number of fully connected layers or the activation function. Two layers of ReLUs performed best in the development set. The rest of the parameters were set by default: 256 hidden units, Adam optimizer with an initial learning rate of 1.0e\u22123 and batches of 256 instances. Training stops when the accuracy in the development set drops for 10 consecutive epochs or when a maximum of 300 epochs is reached. We select the model that obtained the best accuracy in the development set before stopping. The same parameters and model were used for all word experts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Expert models",
"sec_num": "2.2"
},
{
"text": "We now describe how to represent context. Sparse bag-of-words (BoW): In this model, depicted in Figure 1 (a), the context is represented as the addition of the one-hot vector for each word, with as many dimensions as the vocabulary size. The target mention is assigned a zero vector, akin to ignoring it. The vocabulary is large, comprising more than 200K different words, which slowed down learning. Alternatively, we also clustered the words in the vocabulary. In this case, we use those clusters to represent the words in the one-hot vector, yielding a bag of clusters representation. We used the word2vec 3 toolkit to build the clusters from English Wikipedia text. The corpus was lower-cased and tokenized. We found that using a 3K cluster size does best in development. 4 As the results on development for the models using words were below those of clusters, we will report only results for clusters. Continuous bag-of-words (CBoW): In this case, see Figure 1 (b), context is represented with the centroid of pre-trained word embeddings, where the mention is represented by a vector of zeros. The embeddings were trained over the English Wikipedia using word2vec (Mikolov et al., 2013) . The corpus was first lower-cased, and we used a window size of 20, 10 negative samples and 7 iterations. The embeddings have a dimensionality of 300. We also tested a number of pre-trained embeddings, but we did not obtain better results, perhaps because our embeddings were trained on Wikipedia, which is also the training corpus for the NED system. When combined with the classifiers, we kept the embeddings fixed.",
"cite_spans": [
{
"start": 777,
"end": 778,
"text": "4",
"ref_id": null
},
{
"start": 1169,
"end": 1191,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 957,
"end": 965,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Word Expert models",
"sec_num": "2.2"
},
{
"text": "Recurrent Neural Network (LSTM): As a third alternative, we considered a recurrent neural network based on LSTMs (Hochreiter and Schmidhuber, 1997) to exploit the dependencies among the word sequence that forms the input context ( Figure 1 , (c)). We use a single LSTM to encode the input contexts as follows. We first replace the target mention with a special symbol which has a manually assigned constant embedding, and then feed the sequence into the LSTM. The last hidden vector is taken to represent the context. The LSTMs have 512 hidden units and 300 dimensional word embeddings, which are initialized with the embedding vectors used in the continuous BoW model described above. The LSTM layers have a dropout layer, with 0.2 dropout probability.",
"cite_spans": [
{
"start": 113,
"end": 147,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 231,
"end": 239,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Word Expert models",
"sec_num": "2.2"
},
{
"text": "We explored GRUs, stacked LSTMs, and temporal average and max pooling over hidden states, but none of them improved results on development.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Expert models",
"sec_num": "2.2"
},
{
"text": "One of the main problems of word experts is that they need a large number of manually annotated examples for each possible mention, which makes them unsuitable for less frequent mentions. As an alternative, we also trained a single model. Given the set of all training instances N m for all possible mentions m \u2208 M , we train a classifier that, for each context c i , produces the correct entity e i \u2208 E. This classifier has a large number of classes |E|. We discarded entities with fewer than 50 mentions, and gathered up to 5K random instances for the rest. Note that clipping the instance number to 5K effectively downsamples those entities that are highly frequent. All in all, we gather a training corpus of 53M annotated examples for 248K target entities in this single model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single model",
"sec_num": "2.3"
},
{
"text": "We adopted the recurrent model presented in the previous section. In this case, we also replace the target mention with a special symbol which has a manually assigned constant embedding vector, feed it into the LSTM, and use the last hidden vector h as the context representation. The classifier follows the same architecture as the word expert model. In this case the LSTM has 2048 hidden units, producing a 512-dimensional context representation, and 300 dimensional word embeddings, which are again initialized with the pre-trained embeddings from the previous section, keeping them fixed. The final softmax layer has 248K dimensions, the number of candidate entities. We checked other hidden-unit sizes with no better results. In order to improve results, we filter out the candidates which are not in the dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single model",
"sec_num": "2.3"
},
{
"text": "Regarding the training details, we use the Adam optimization algorithm with an initial learning rate of 1.0e\u22124 , and a dropout value of 0.2. In this case, we used a 1% sample of Wikipedia instances as a validation set, and we stop early whenever the accuracy in this validation set does not improve for 3 consecutive epochs. Training the model takes around 16 hours per epoch on a single GPU, taking at most 18 iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single model",
"sec_num": "2.3"
},
{
"text": "As an alternative to learning a single model, we can use the text representation layer of the aforementioned single model in the word expert model. That is, after training the single model with the whole Wikipedia, we use the learned model of the LSTM as the text representation layer of the word expert models. This way, we reuse the LSTM which was learned alongside the single model instead of learning a separate LSTM layer for each word expert (see Section 2.2). When training the word experts, we keep the LSTM layer fixed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer learning",
"sec_num": "2.4"
},
{
"text": "As mentioned above, some mentions have as few as 10 training instances. In order to have a larger number of training instances, we augment the training set for a target mention m with the contexts of other mentions that occur as anchors of one of the candidates e i of m (e i \u2208 E m ). Using this strategy, we randomly select up to 250K examples as training instances for each mention. Although augmenting the training set has the advantage of providing more training instances, it also has the drawback of distorting the number of examples for each entity. For instance, in the case of the mention EU most of the examples in Wikipedia refer to the European Union (around 2800) and only a few to Europe (the continent, 716). When augmenting the training set with examples for the entities, we add more examples for Europe (around 44000) than for European Union (around 13000), changing the ratio of labels in the training data significantly. In order to counter-balance this effect, we tried to combine the priors from the original data with the output of the classifier trained with the augmented dataset. Alternatively, we combined both original and augmented classifiers, yielding better results in development. We thus train two classifiers for each mention, one using the original training set P (e|c) orig , and one using the augmented dataset P (e|c) aug . Finally, we combine their scores to produce the combined output P (e|c):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data augmentation",
"sec_num": "2.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (e|c) = P (e|c) orig P (e|c) aug",
"eq_num": "(1)"
}
],
"section": "Data augmentation",
"sec_num": "2.5"
},
{
"text": "3 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data augmentation",
"sec_num": "2.5"
},
{
"text": "We developed and evaluated our model on standard datasets for easier comparison to the state of the art. We use Aidatesta for model selection only (i.e. the parameters were tuned on a subset of Wikipedia, cf. Section 2), and Aidatestb, Tac2010, Tac2011 and Tac2012 for out-of-domain testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data augmentation",
"sec_num": "2.5"
},
{
"text": "Note that we used Aidatesta only to select the best models, given that all hyperparameters were tuned over Wikipedia itself. Table 1 shows the statistics for all datasets. From all mentions, only a subset of them actually refers to an entity in the KB provided by the dataset authors (\"inKB mentions\" row). Our dictionary covers most but not all of those KB entities (\"uniq inKB mentions in dict\" row). Some of the mentions in the datasets are resolved as NIL, for cases where the mention refers to an entity which is not in the KB. The simplest method to return NIL is to first resolve over all Wikipedia entities, and if the selected entity is not in the KB then to return NIL. We focus the evaluation on the mentions linked to an entity in the respective KB, and use the so-called inKB accuracy as the evaluation measure, which is defined as the number of correctly disambiguated mentions divided by the total number of mentions which are linked to the KB. We perform 3 runs for each reported result, reporting mean accuracy and standard deviation values. We also include MFS baselines in the results: given a mention, the baseline assigns the entity in the dictionary with the highest prior probability.",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data augmentation",
"sec_num": "2.5"
},
{
"text": "During testing, given a mention, we search the document and try to find the longest string that a) contains the mention and b) matches an entry in the dictionary. Next, we replace every mention string with that longer string in the document. 5 We also apply the 'One entity per Document' hypothesis, averaging the results of the occurrences for the same mention in the same document (Barrena et al., 2014) . Table 2 shows the performance of each of the context representation models and data augmentation options in Aidatesta. The MFS baseline obtains 71.91, which is a good point of comparison to benchmark our candidate model (the dictionary) with respect to other systems. All our models improve over the MFS baseline by a large margin. As mentioned in Section 2.5, we have three classifiers for each mention. P (e|c) orig uses the original training set, P (e|c) aug uses the augmented training set, and P (e|c) combines both. The table shows that the results of the original and augmented classifiers are more or less comparable, while the combination consistently yields the best results for all context representations options.",
"cite_spans": [
{
"start": 242,
"end": 243,
"text": "5",
"ref_id": null
},
{
"start": 383,
"end": 405,
"text": "(Barrena et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 408,
"end": 415,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data augmentation",
"sec_num": "2.5"
},
{
"text": "Regarding the representation models, we can observe that the sparse Bag-of-word model yields worse results than the continuous Bag-of-words. The LSTMs learned separately do not improve over continuous BoW, while the LSTM transferred from the single model obtains the best results. In addition to the results in the table, the single model obtains an accuracy of 45.95, well below the rest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development results",
"sec_num": "3.1"
},
{
"text": "These results confirm our intuitions. Regarding the single model vs. word experts, the classifier has a much easier task in the second case, as the number of classes to predict is much smaller for each classifier. Regarding the performance of the word expert LSTMs, our hypothesis was that, given the long tail distribution of the number of training instances, the per-mention LSTMs of many mentions would not have enough training instances to learn effective representations. We checked this hypothesis by plotting the results for each method according to the number of training instances. Figure 2 shows the inKB accuracy for mentions bucketed according to the number of training instances 6 . Continuous BoW outperforms LSTMs on mentions with a small number of training instances, while the situation is reversed for mentions with a large number of training instances. The graph also shows that the transferred LSTM yields better results for all frequencies, and that the Sparse BoW model underperforms the rest of the models consistently. As an aside, we observed that for the words which have more than 200,000 training instances, both the per-mention LSTM and the transferred LSTM yield similar results.",
"cite_spans": [],
"ref_spans": [
{
"start": 588,
"end": 596,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Development results",
"sec_num": "3.1"
},
{
"text": "In this section we compare our system with the state-of-the-art in Named Entity Disambiguation. Given the vast number of NED systems, we report only the results of the most relevant high-performing systems. Note that, contrary to us, many high-performing systems use in-domain training data (Ganea and Hofmann, 2017; Globerson et al., 2016) , and/or external candidates and link counts when building the dictionary (Lazic et al., 2015; Globerson et al., 2016; Yamada et al., 2016) . The results show that all our models improve over local out-of-domain systems trained solely on Wikipedia, but, most notably, also over in-domain systems which were trained on Aidatrain (marked with *) and the semi-supervised system (marked with \u2020), which uses large amounts of un-annotated data. As expected, the relative performance of our systems is the same as in development.",
"cite_spans": [
{
"start": 296,
"end": 321,
"text": "(Ganea and Hofmann, 2017;",
"ref_id": "BIBREF11"
},
{
"start": 322,
"end": 345,
"text": "Globerson et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 420,
"end": 440,
"text": "(Lazic et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 441,
"end": 464,
"text": "Globerson et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 465,
"end": 485,
"text": "Yamada et al., 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Final results",
"sec_num": "3.2"
},
{
"text": "All the systems included in Table 3 (except Lazic et al., 2015) use the \"means\" tables of YAGO as candidates, as this was the entity inventory used by the developers of the dataset (Hoffart et al., 2011) . In our case, as we link mentions to Wikipedia entities, we just ignore those entities not belonging to the YAGO \"means\" table. In order to provide a head-to-head comparison, the result of our best system when not using the YAGO information is 89.93, more than three points better. Table 4 shows the inKB accuracy results on the three TAC datasets. In this case, the dataset is accompanied by a KB which is a subset of the Wikipedia 2008 snapshot. Following standard procedure (Globerson et al., 2016), we filter out entities not listed in the KB before evaluating the results. The table shows that the relative performance of our systems is stable. Our best system outperforms the results of the local system trained on Wikipedia on both Tac2011 and Tac2012 (10 and 3 points). Regarding the comparison with other local systems (in-domain and semi-supervised), our results are the best, including global methods. The only exception is on the TAC2012 dataset, where the mentions were short and known to be especially challenging. In fact, the winner of the task (Cucerzan, 2012) made a special effort to find longer coreferent mentions in the document. In the case of (Lazic et al., 2015; Globerson et al., 2016) , they use a coreference resolver, which could explain their better results on this dataset. Note that in this section we do not report results of systems which use the candidate dictionary of (Pershina et al., 2015) . As observed by (Globerson et al., 2016) , among others, that candidate dictionary has been manually pruned and extended to contain the gold standard entity, yielding a dictionary that has a 100% upper bound and very limited ambiguity. This makes the results of systems using this dictionary look much better than those using automatically constructed candidate models. We thus omit results from some papers (Pershina et al., 2015; Sil et al., 2018) , and report the results using automatically constructed dictionaries for the rest (e.g. Globerson et al., 2016) .",
"cite_spans": [
{
"start": 181,
"end": 203,
"text": "(Hoffart et al., 2011)",
"ref_id": "BIBREF16"
},
{
"start": 751,
"end": 771,
"text": "(Lazic et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 1323,
"end": 1339,
"text": "(Cucerzan, 2012)",
"ref_id": "BIBREF10"
},
{
"start": 1440,
"end": 1460,
"text": "(Lazic et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 1461,
"end": 1484,
"text": "Globerson et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 1678,
"end": 1701,
"text": "(Pershina et al., 2015)",
"ref_id": "BIBREF23"
},
{
"start": 1719,
"end": 1743,
"text": "(Globerson et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 2110,
"end": 2133,
"text": "(Pershina et al., 2015;",
"ref_id": "BIBREF23"
},
{
"start": 2134,
"end": 2151,
"text": "Sil et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 2241,
"end": 2264,
"text": "Globerson et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 485,
"end": 492,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Final results",
"sec_num": "3.2"
},
{
"text": "In this section we briefly review NED systems and the text representation literature. Hachey et al. (2012) present a detailed overview of all possible components, but here we focus on the most relevant high-performing systems. Please see (Ling et al., 2015) for a more detailed review of past research.",
"cite_spans": [
{
"start": 87,
"end": 100,
"text": "Hachey et al.",
"ref_id": null
},
{
"start": 255,
"end": 274,
"text": "(Ling et al., 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Among local systems trained on Wikipedia alone, (Lazic et al., 2015) was the best performing one to date. Their system is based on probabilistic estimation, with a rich preprocessing pipeline, including dependency parsing, common noun phrase identification and coreference resolution. They present results for both a supervised version and a graph-based semi-supervised extension which improves results. We think that the results of our method could be improved using richer pre-processing, especially the use of coreference to find longer coreferent mentions, which reduces the ambiguity of the mention and improves results.",
"cite_spans": [
{
"start": 57,
"end": 77,
"text": "(Lazic et al., 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NED systems",
"sec_num": "4.1"
},
{
"text": "Among global models, (Chisholm and Hachey, 2015) use a learning to rank algorithm which combines local and global features, trained on in-domain corpora (Aidatrain and Tac2010 train, respectively). They improve the results significantly by extending the information extracted from Wikipedia with a web crawl.",
"cite_spans": [
{
"start": 21,
"end": 48,
"text": "(Chisholm and Hachey, 2015)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NED systems",
"sec_num": "4.1"
},
{
"text": "In (Yamada et al., 2016), the authors jointly learn word and entity embeddings using Wikipedia. The similarities between word and entity embeddings are used as features to train Gradient Boosted Regression Trees on in-domain data. They report both local and global results, with a clear improvement when adding a global component.",
"cite_spans": [
{
"start": 3,
"end": 24,
"text": "(Yamada et al., 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NED systems",
"sec_num": "4.1"
},
{
"text": "Ganea and Hofmann (2017) also present a local and global algorithm. Their local algorithm combines word and entity embeddings with an attention mechanism trained on in-domain data. The global component is Loopy Belief Propagation, which optimizes the global sequence coherence initialized by the local algorithm. They report the best results among both local and global algorithms on Aidatestb but, unfortunately, they do not provide results on the TAC datasets. Given that their global algorithm yields an improvement of 3 points, and that our local method exploits complementary information, we would like to combine both in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NED systems",
"sec_num": "4.1"
},
{
"text": "Globerson et al. (2016) add a global component to Plato (Lazic et al., 2015), whose weights are used to initialize a multi-focal attention mechanism. The global model is trained and optimized on in-domain training datasets. They report the best performance on the TAC datasets to date, and very good results on Aida. Their very strong results on TAC 2012 (together with those of Lazic et al., 2015) seem to be due to the use of coreference in the candidate model, as this dataset includes shorter target mentions than the rest. More recently, Sil et al. (2018) introduce a deep neural cross-lingual entity linking system using a combination of CNNs, LSTMs and NTNs, with strong results. Their method performs similarly to ours on TAC2010, but using the manually curated dictionary of (Pershina et al., 2015), which, as stated before, greatly simplifies the task (cf. Section 3.2).",
"cite_spans": [
{
"start": 57,
"end": 77,
"text": "(Lazic et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 539,
"end": 556,
"text": "Sil et al. (2018)",
"ref_id": "BIBREF26"
},
{
"start": 776,
"end": 799,
"text": "(Pershina et al., 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NED systems",
"sec_num": "4.1"
},
{
"text": "All NED systems mentioned above build a single model for all possible target mentions. The only word expert approach that we are aware of is briefly mentioned in (Chang et al., 2016). That paper compares NED and word sense disambiguation, and builds a bag-of-words logistic regression classifier for each mention. Their result on the TAC2010 dataset is 84.5, below ours.",
"cite_spans": [
{
"start": 162,
"end": 182,
"text": "(Chang et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NED systems",
"sec_num": "4.1"
},
{
"text": "Text representation for deep learning is a hot topic in natural language processing, and several evaluation frameworks have been proposed (Conneau and Kiela, 2018; Wang et al., 2018). Our 500K classification tasks can be seen as an additional large-scale testbed for text representation proposals.",
"cite_spans": [
{
"start": 139,
"end": 164,
"text": "(Conneau and Kiela, 2018;",
"ref_id": "BIBREF8"
},
{
"start": 165,
"end": 183,
"text": "Wang et al., 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text representation",
"sec_num": "4.2"
},
{
"text": "In a setting similar to ours, (Yuan et al., 2016; Peters et al., 2018) propose to train a language model based on LSTMs and then use it for word sense disambiguation. Instead of using the context representations to learn a classifier directly as we do, they use label propagation in representation space. In our case, instead of using a language model, we train the text representation model on a more closely related task, i.e., that of disambiguating all possible entities.",
"cite_spans": [
{
"start": 30,
"end": 49,
"text": "(Yuan et al., 2016;",
"ref_id": "BIBREF31"
},
{
"start": 50,
"end": 70,
"text": "Peters et al., 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text representation",
"sec_num": "4.2"
},
{
"text": "While bags of pre-trained word embeddings and LSTMs are the most popular approaches for text representation, many alternatives exist. For instance, ELMo (Peters et al., 2018) obtains word embeddings that include contextual information, and then combines them using bag-of-words or other alternatives. Alternatively, universal sentence encoding models that are useful in many tasks have been proposed (Arora et al., 2017; Logeswaran and Lee, 2018; Subramanian et al., 2018; Cer et al., 2018). We think that, in supervised classification tasks such as ours, the transferred LSTM already captures contextual information well and that the performance bottleneck might lie in the classifier. If that is the case, stronger context representation models might not make much of a difference. We plan to explore this in future work.",
"cite_spans": [
{
"start": 153,
"end": 174,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 398,
"end": 418,
"text": "(Arora et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 419,
"end": 444,
"text": "Logeswaran and Lee, 2018;",
"ref_id": "BIBREF20"
},
{
"start": 445,
"end": 470,
"text": "Subramanian et al., 2018;",
"ref_id": "BIBREF28"
},
{
"start": 471,
"end": 488,
"text": "Cer et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text representation",
"sec_num": "4.2"
},
{
"text": "In this paper we propose to break the task of NED into 500K classification tasks, one for each target mention, as opposed to building a single model for all 500K mentions. The advantage of this word expert approach is that each of the 500K classification tasks is simpler. On the negative side, the scarcity of training data is made worse. We show that this problem can be effectively alleviated with data augmentation and especially with transfer learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "A set of 500K classification problems provides a great experimental framework for testing text representation and classification algorithms. Given the scarce data available, learning a classifier directly on a bag-of-words or LSTM representation yields weak results. Bringing in pre-trained embeddings improves results, but the key to strong performance is to learn a single model for all entities using an LSTM and then transfer the LSTM to each of the word experts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "Our model is a local system using Wikipedia information alone, yielding the best results among local systems, comparable to systems trained on in-domain data and incorporating global coherence models. All training examples and models in this paper, as well as the pytorch code to reproduce the results, are available 7 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "In the future, the performance of our system could easily be improved by combining it with a global method such as (Ganea and Hofmann, 2017). There are also specific improvements that could be made, such as using coreference (Lazic et al., 2015) or additional information from web crawls (Chisholm and Hachey, 2015). Regarding the use of in-domain training, we think that our out-of-domain results reflect the most realistic scenario, as in-domain training data is rare in practice.",
"cite_spans": [
{
"start": 220,
"end": 240,
"text": "(Lazic et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 283,
"end": 310,
"text": "(Chisholm and Hachey, 2015)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "Regarding text representation, we tested some straightforward alternatives. Recent work has proposed stronger options which could improve the results of our word experts further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "https://github.com/anderbarrena/500kNED 2 We chose the 2014 snapshot, which gives good results in the contemporary evaluation datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://code.google.com/archive/p/word2vec/ 4 We tried 100, 300, 800, 1K, 3K, 8K and 10K cluster sizes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Mentions that are named as a DBPedia entity classified as location are not expanded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We set 10 buckets with an equal number of mentions in each bucket.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/anderbarrena/500kNED",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was partially supported by the Spanish MINECO (TUNER TIN2015-65308-C5-1-R, MUSTER PCIN-2015-226, cofunded by EU FEDER), the UPV/EHU (excellence research group), and the NVIDIA GPU grant program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A simple but tough-to-beat baseline for sentence embeddings",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings. In International Conference on Learning Representations.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "one entity per discourse\" and \"one entity per collocation\" improve named-entity disambiguation",
"authors": [
{
"first": "Ander",
"middle": [],
"last": "Barrena",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Cabaleiro",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2260--2269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ander Barrena, Eneko Agirre, Bernardo Cabaleiro, Anselmo Pe\u00f1as, and Aitor Soroa. 2014. \"one entity per discourse\" and \"one entity per collocation\" im- prove named-entity disambiguation. In Proceedings of COLING 2014, the 25th International Confer- ence on Computational Linguistics: Technical Pa- pers, pages 2260-2269, Dublin, Ireland. Dublin City University and Association for Computational Lin- guistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Alleviating poor context with background knowledge for named entity disambiguation",
"authors": [
{
"first": "Ander",
"middle": [],
"last": "Barrena",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1903--1912",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ander Barrena, Aitor Soroa, and Eneko Agirre. 2016. Alleviating poor context with background knowl- edge for named entity disambiguation. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1903-1912, Berlin, Germany. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Deep learning of representations for unsupervised and transfer learning",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ICML Workshop on Unsupervised and Transfer Learning",
"volume": "",
"issue": "",
"pages": "17--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio. 2012. Deep learning of representa- tions for unsupervised and transfer learning. In Pro- ceedings of ICML Workshop on Unsupervised and Transfer Learning, pages 17-36.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Using encyclopedic knowledge for named entity disambiguation",
"authors": [
{
"first": "R",
"middle": [
"C"
],
"last": "Bunescu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Pasca",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceesings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. C. Bunescu and M. Pasca. 2006. Using encyclo- pedic knowledge for named entity disambiguation. In Proceesings of the 11th Conference of the Euro- pean Chapter of the Association for Computational Linguistics (EACL), pages 9-16, Trento, Italy. The Association for Computer Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A comparison of named-entity disambiguation and word sense disambiguation",
"authors": [
{
"first": "Angel",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Valentin",
"middle": [
"I"
],
"last": "Spitkovsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angel Chang, Valentin I. Spitkovsky, Christopher D. Manning, and Eneko Agirre. 2016. A comparison of named-entity disambiguation and word sense dis- ambiguation. In Proceedings of the Tenth Interna- tional Conference on Language Resources and Eval- uation (LREC 2016), Paris, France. European Lan- guage Resources Association (ELRA).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Entity disambiguation with web links",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Chisholm",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Hachey",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "145--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Chisholm and Ben Hachey. 2015. Entity dis- ambiguation with web links. Transactions of the As- sociation for Computational Linguistics, 3:145-156.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "SentEval: An Evaluation Toolkit for Universal Sentence Representations",
"authors": [
{
"first": "A",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Conneau and D. Kiela. 2018. SentEval: An Evalua- tion Toolkit for Universal Sentence Representations. ArXiv e-prints.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Large-Scale Named Entity Disambiguation Based on Wikipedia Data",
"authors": [
{
"first": "S",
"middle": [],
"last": "Cucerzan",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "708--716",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Cucerzan. 2007. Large-Scale Named Entity Dis- ambiguation Based on Wikipedia Data. In Pro- ceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning (EMNLP- CoNLL), pages 708-716, Prague, Czech Republic.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Msr system for entity linking at tac 2012",
"authors": [
{
"first": "",
"middle": [],
"last": "Silviu Cucerzan",
"suffix": ""
}
],
"year": 2012,
"venue": "Text Analysis Conference -Knowledge Base Population",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Silviu Cucerzan. 2012. Msr system for entity linking at tac 2012. In Text Analysis Conference -Knowledge Base Population 2012 TAC-KBP 2012.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Deep joint entity disambiguation with local neural attention",
"authors": [
{
"first": "Eugen",
"middle": [],
"last": "Octavian",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Ganea",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2619--2629",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Octavian-Eugen Ganea and Thomas Hofmann. 2017. Deep joint entity disambiguation with local neural attention. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 2619-2629, Copenhagen, Denmark. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Collective entity resolution with multi-focal attention",
"authors": [
{
"first": "Nevena",
"middle": [],
"last": "Amir Globerson",
"suffix": ""
},
{
"first": "Soumen",
"middle": [],
"last": "Lazic",
"suffix": ""
},
{
"first": "Amarnag",
"middle": [],
"last": "Chakrabarti",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Subramanya",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Ringaard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "621--631",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Globerson, Nevena Lazic, Soumen Chakrabarti, Amarnag Subramanya, Michael Ringaard, and Fer- nando Pereira. 2016. Collective entity resolution with multi-focal attention. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 621-631, Berlin, Germany. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Evaluating Entity Linking with Wikipedia",
"authors": [
{
"first": "B",
"middle": [],
"last": "Hachey",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nothman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2012,
"venue": "Artificial Intelligence",
"volume": "194",
"issue": "",
"pages": "130--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Hachey, W. Radford, J. Nothman, M. Honnibal, and J.R. Curran. 2012. Evaluating Entity Linking with Wikipedia. Artificial Intelligence, 194:130-150.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A generative entity-mention model for linking entities with knowledge base",
"authors": [
{
"first": "X",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "945--954",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Han and L. Sun. 2011. A generative entity-mention model for linking entities with knowledge base. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies -Volume 1, HLT '11, pages 945-954, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Robust Disambiguation of Named Entities in Text",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hoffart",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Yosef",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Bordino",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "F\u00fcrstenau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Pinkal",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Spaniol",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Taneva",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Thater",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "782--792",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Hoffart, M.A. Yosef, I. Bordino, H. F\u00fcrstenau, M. Pinkal, M. Spaniol, B. Taneva, S. Thater, and G. Weikum. 2011. Robust Disambiguation of Named Entities in Text. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, EMNLP '11, pages 782-792, Strouds- burg, PA, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Plato: A selective context model for entity resolution",
"authors": [
{
"first": "Nevena",
"middle": [],
"last": "Lazic",
"suffix": ""
},
{
"first": "Amarnag",
"middle": [],
"last": "Subramanya",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ringgaard",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "503--515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nevena Lazic, Amarnag Subramanya, Michael Ring- gaard, and Fernando Pereira. 2015. Plato: A selec- tive context model for entity resolution. Transac- tions of the Association for Computational Linguis- tics, 3:503-515.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deep learning",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2015,
"venue": "Nature",
"volume": "521",
"issue": "7553",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature, 521(7553):436.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Design challenges for entity linking",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "315--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Ling, Sameer Singh, and Daniel S Weld. 2015. Design challenges for entity linking. Transactions of the Association for Computational Linguistics, 3:315-328.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "An efficient framework for learning sentence representations",
"authors": [
{
"first": "Lajanugen",
"middle": [],
"last": "Logeswaran",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence represen- tations. In International Conference on Learning Representations.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Overview of the TAC 2009 Knowledge Base Population track",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Mcnamee",
"suffix": ""
},
{
"first": "Hoa",
"middle": [],
"last": "Dang",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul McNamee and Hoa Dang. 2009. Overview of the TAC 2009 Knowledge Base Population track. In TAC.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Personalized page rank for named entity disambiguation",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pershina",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "238--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Pershina, Yifan He, and Ralph Grishman. 2015. Personalized page rank for named entity disam- biguation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 238-243, Denver, Colorado. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proc. of NAACL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Local and Global Algorithms for Disambiguation to Wikipedia",
"authors": [
{
"first": "L",
"middle": [
"A"
],
"last": "Ratinov",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Anderson",
"suffix": ""
}
],
"year": 2011,
"venue": "The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "1375--1384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L.A. Ratinov, D. Roth, D. Downey, and M. Ander- son. 2011. Local and Global Algorithms for Disam- biguation to Wikipedia. In The 49th Annual Meet- ing of the Association for Computational Linguis- tics: Human Language Technologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Ore- gon, USA, pages 1375-1384. The Association for Computer Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Neural cross-lingual entity linking",
"authors": [
{
"first": "Avirup",
"middle": [],
"last": "Sil",
"suffix": ""
},
{
"first": "Gourab",
"middle": [],
"last": "Kundu",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "Wael",
"middle": [],
"last": "Hamza",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI 2018",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avirup Sil, Gourab Kundu, Radu Florian, and Wael Hamza. 2018. Neural cross-lingual entity linking. In AAAI2018.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A cross-lingual dictionary for English Wikipedia concepts",
"authors": [
{
"first": "Valentin",
"middle": [
"I"
],
"last": "Spitkovsky",
"suffix": ""
},
{
"first": "Angel",
"middle": [
"X"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2012,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "3168--3175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin I Spitkovsky and Angel X Chang. 2012. A cross-lingual dictionary for English Wikipedia concepts. In LREC, pages 3168-3175.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning",
"authors": [
{
"first": "S",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "C",
"middle": [
"J"
],
"last": "Pal",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Subramanian, A. Trischler, Y. Bengio, and C. J Pal. 2018. Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning. ArXiv e-prints.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding",
"authors": [
{
"first": "A",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "S",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. ArXiv e-prints.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Joint learning of the embedding of words and entities for named entity disambiguation",
"authors": [
{
"first": "Ikuya",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Hideaki",
"middle": [],
"last": "Takeda",
"suffix": ""
},
{
"first": "Yoshiyasu",
"middle": [],
"last": "Takefuji",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "250--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 250-259, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Semi-supervised word sense disambiguation with neural models",
"authors": [
{
"first": "Dayu",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Doherty",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Altendorf",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers",
"volume": "",
"issue": "",
"pages": "1374--1385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dayu Yuan, Julian Richardson, Ryan Doherty, Colin Evans, and Eric Altendorf. 2016. Semi-supervised word sense disambiguation with neural models. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 1374-1385.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Text understanding from scratch",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1502.01710"
]
},
"num": null,
"urls": [],
"raw_text": "Xiang Zhang and Yann LeCun. 2015. Text understanding from scratch. arXiv preprint arXiv:1502.01710.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Deep learning models for NED. On the left side, context models: (a) Sparse BoW, (b) Continuous BoW, (c) LSTM. On the right side the classification models: (d) word expert model, (e) single model. The transfer model first learns an LSTM on the single model, then reuses the LSTM to learn each of the word expert models.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Development results (Aidatesta) as inKB accuracy according to number of training instances.",
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"num": null,
"text": "The main dataset is the Aida CoNLL dataset which is composed of news documents from the Reuters corpus. It comprises three parts: Aidatrain training set, Aidatesta development set and Aidatestb test set. We also include the three earli-",
"content": "<table><tr><td/><td>testa</td><td>testb</td><td>tac2010</td><td>tac2011</td><td>tac2012</td></tr><tr><td>mentions</td><td>5917</td><td>5616</td><td>2250</td><td>2250</td><td>2229</td></tr><tr><td>inKB mentions</td><td>4792</td><td>4485</td><td>1020</td><td>1124</td><td>1177</td></tr><tr><td>uniq mentions</td><td>2600</td><td>2441</td><td>750</td><td>1315</td><td>781</td></tr><tr><td>uniq inKB mentions</td><td>1850</td><td>1685</td><td>386</td><td>628</td><td>509</td></tr><tr><td>inKB mentions in dict</td><td>1841</td><td>1675</td><td>382</td><td>597</td><td>499</td></tr></table>",
"html": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "Statistics of the datasets (see text for details).",
"content": "<table/>",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"text": "inKB accuracy (\u00b1 standard deviation) on Aidatesta for P(e|c) orig, P(e|c) aug and the combined P(e|c).",
"content": "<table><tr><td/><td>Sparse BoW</td><td>CBoW</td><td>LSTM</td><td>Transfer</td></tr><tr><td>P(e|c) orig</td><td>79.65\u00b10.06</td><td>82.48\u00b10.48</td><td>80.35\u00b10.05</td><td>84.70\u00b10.06</td></tr><tr><td>P(e|c) aug</td><td>79.54\u00b10.26</td><td>81.74\u00b10.21</td><td>80.66\u00b10.26</td><td>82.39\u00b10.42</td></tr><tr><td>P(e|c)</td><td>83.28\u00b10.17</td><td>86.19\u00b10.19</td><td>84.35\u00b10.30</td><td>86.87\u00b10.14</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "Development results (Aidatesta) as inKB accuracy and standard deviation for Sparse BoW, Continuous BoW, LSTM and transferred LSTMs. Each row corresponds to the original training data, augmented training data, and combination.",
"content": "<table/>",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"text": "shows the inKB results on Aidatestb, the most popular evaluation dataset. The results show",
"content": "<table><tr><td>Method</td><td>testb</td></tr><tr><td>Local models</td><td/></tr><tr><td>(Lazic et al., 2015) sup.</td><td>79.7</td></tr><tr><td>Sparse BoW</td><td>86.72\u00b10.23</td></tr><tr><td>Continuous BoW</td><td>89.39\u00b10.44</td></tr><tr><td>LSTM</td><td>88.44\u00b10.26</td></tr><tr><td>Transfer LSTM</td><td>91.19\u00b10.07</td></tr><tr><td colspan=\"2\">(Lazic et al., 2015) \u2020 semi-sup. 86.4 \u2020</td></tr><tr><td>(Yamada et al., 2016)*</td><td>87.2*</td></tr><tr><td>(Ganea and Hofmann, 2017)*</td><td>88.8*</td></tr><tr><td>Local &amp; Global models</td><td/></tr><tr><td colspan=\"2\">(Chisholm and Hachey, 2015)* 88.7*</td></tr><tr><td>(Globerson et al., 2016)*</td><td>91.0*</td></tr><tr><td>(Yamada et al., 2016)*</td><td>91.5*</td></tr><tr><td>(Ganea and Hofmann, 2017)*</td><td>92.2*</td></tr></table>",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"text": "Test results on Aidatestb as inKB accuracy.",
"content": "<table/>",
"html": null
},
"TABREF7": {
"type_str": "table",
"num": null,
"text": "Test results on TAC datasets as inKB accuracy. * for systems trained on in-domain data. \u2020 for systems using semi-supervised methods.",
"content": "<table/>",
"html": null
}
}
}
}