{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:21:38.209183Z"
},
"title": "On the Complementary Nature of Knowledge Graph Embedding, Fine Grain Entity Types, and Language Modeling",
"authors": [
{
"first": "Rajat",
"middle": [],
"last": "Patel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {
"settlement": "Baltimore County"
}
},
"email": "rpatel12@umbc.edu"
},
{
"first": "Francis",
"middle": [],
"last": "Ferraro",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {
"settlement": "Baltimore County"
}
},
"email": "ferraro@umbc.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We demonstrate the complementary natures of neural knowledge graph embedding, finegrain entity type prediction, and neural language modeling. We show that a language model-inspired knowledge graph embedding approach yields both improved knowledge graph embeddings and fine-grain entity type representations. Our work also shows that jointly modeling both structured knowledge tuples and language improves both.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We demonstrate the complementary natures of neural knowledge graph embedding, finegrain entity type prediction, and neural language modeling. We show that a language model-inspired knowledge graph embedding approach yields both improved knowledge graph embeddings and fine-grain entity type representations. Our work also shows that jointly modeling both structured knowledge tuples and language improves both.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The surge in large knowledge graphs-e.g., Freebase (Bollacker et al., 2008) , DBpedia (Auer et al., 2007) , YAGO (Suchanek et al., 2007) -has induced knowledge graph-based applications. Properly making use of this structured knowledge is a prime challenge. Knowledge graph embedding [KGE] Socher et al., 2013) addresses this problem by representing the nodes (entities) and their edges (relations) in a continuous vector space. Learning these representations deduces new facts from and identifies dubious entries in the knowledge base. It also improves relation extraction , knowledge base completion and entity resolution (Nickel et al., 2011) .",
"cite_spans": [
{
"start": 51,
"end": 75,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF6"
},
{
"start": 86,
"end": 105,
"text": "(Auer et al., 2007)",
"ref_id": "BIBREF3"
},
{
"start": 113,
"end": 136,
"text": "(Suchanek et al., 2007)",
"ref_id": "BIBREF43"
},
{
"start": 283,
"end": 288,
"text": "[KGE]",
"ref_id": null
},
{
"start": 289,
"end": 309,
"text": "Socher et al., 2013)",
"ref_id": "BIBREF41"
},
{
"start": 623,
"end": 644,
"text": "(Nickel et al., 2011)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Entity typing can provide crucial constraints and information on the knowledge contained in a KG. While historically this has been modeled as explicitly structured knowledge, and recent work has modeled the contextual language in order to make in-context entity type classifications, we argue that language modeling techniques provide an effective approach for modeling both the explicit and implicit constraints found in both structured resources and free-form contextual language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Meanwhile, while language modeling [LM] has historically been a core problem within natural language processing (Rosenfeld, 1994) Figure 1: Our joint learning framework learns the representation for the entity \"Barack Obama's\" in the same embedding space as that of the given input contextual description, \"Barack Obama gave a speech to Congress.\" Further, by learning the entity type of '/person/politician', the model provides a better contextual understanding of the underlying entity.",
"cite_spans": [
{
"start": 35,
"end": 39,
"text": "[LM]",
"ref_id": null
},
{
"start": 112,
"end": 129,
"text": "(Rosenfeld, 1994)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "learning advances have been very successful in convincing the community of the power and flexibility of language modeling (Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019, i.a.) .",
"cite_spans": [
{
"start": 122,
"end": 143,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF32"
},
{
"start": 144,
"end": 164,
"text": "Devlin et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 165,
"end": 189,
"text": "Yang et al., 2019, i.a.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Building off of insights and advances in knowledge graph embedding, entity typing, and language modeling, we identify and advocate for leveraging the complementary nature of knowledge graphs, entity typing, and language modeling. In it, we introduce a comparatively simple framework that uses powerful, yet well-known, neural building blocks to (jointly) learn representations that simultaneously capture (1) explicit facts and information stored in a knowledge base, (2) explicit constraints on facts (exemplified by entity typing), and (3) implicit knowledge and constraints communicated via natural language and discourse. Figure 1 provides an overview of the joint learning framework proposed in this work: an entity (\"Barack Obama\") along with its relations are represented in a continuous vector space. The framework also understands the underlying type (\"/person/politician\") for the given entity by learning the entity representation with contextual understanding (\"Barack Obama gave a speech to Congress\"). By using the type and the factual information the framework enhances the comprehension of the focus entity in downstream applications like language modeling. 1 We note that others have explored what KG facts have already been learned by specific, advanced/contemporary LMs (Petroni et al., 2019) . That work utilized a pre-trained BERT model and queried what types of KG facts it contains. In addition, our primary goal is not broad, state-of-the-art performance-though we demonstrate that very strong performance is achievable. Rather, our goal is to examine what the complementary strengths, and evident limitations, of language modeling techniques for knowledge and entity type representation are. In doing so, we show that our joint framework yields empirical benefits for individual tasks. Our models leverage context-independent word embeddings, and we specifically eschew language models pre-trained on web-scale data. 
2 Our results further suggest that schema-free approaches to knowledge graph construction/embedding and fine grained entity typing should be studied in greater detail, and competitive, if not state-of-theart, performance can be obtained with comparatively simpler, resource-starved language models. This has promising implications for low-resource, few-shot, and/or domain-specific information extraction needs.",
"cite_spans": [
{
"start": 1174,
"end": 1175,
"text": "1",
"ref_id": null
},
{
"start": 1289,
"end": 1311,
"text": "(Petroni et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 1942,
"end": 1943,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 626,
"end": 634,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Using publicly available data, our work has four main contributions. (1) It advocates for a languagemodeling based knowledge graph embedding architecture that achieves state-of-the-art performance on knowledge graph completion/fact prediction against comparable methods. (2) It introduces a neural-based technique based on both knowledge graph embedding and language modeling to predict fine-grain entity types, which yields competitive through state-of-the-art performance against comparable methods. (3) It proposes the joint learning of factual information with the underlying entity types in a shared embedding space. (4) It demonstrates that learning a knowledge graph embedding 1 Though the entity typing examples here could be interpreted as being hierarchical, our method neither assumes nor requires any type hierarchy. 2 We do not deny that current pre-trained language models can be effective for other language-based tasks beyond language modeling. However, the reason we do not use transformer LMs like BERT or GPT-2 is because the amount of data they are pre-trained with can make it difficult to (a) fairly compare to previous work (is it the modeling approach, or the underlying, large-scale data at work?), and (b) identify and track the benefits of learning our tasks jointly. model and language model in a shared embedding space are symbiotic, yielding strong KGE performance and drastic perplexity improvements. 3",
"cite_spans": [
{
"start": 829,
"end": 830,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The underlying information in the knowledge bases is difficult to comprehend and manipulate (Wang et al., 2014) . A vast number of knowledge graph embeddings techniques have been proposed over the years to mirror the entities and relations in the knowledge graphs. RESCAL (Krompa\u00df et al., 2013) is one of the first semantic-based embedding technique that captures the latent interaction between the entities and the relation. A model such as RESCAL can use graph properties to improve the underlying entity and relation representations (Padia et al., 2019; Balazevic et al., 2019; Minervini et al., 2017) . A more simplified approach is defined in DistMult (Yang et al., 2014) by restricting the relation matrix to a diagonal matrix.",
"cite_spans": [
{
"start": 92,
"end": 111,
"text": "(Wang et al., 2014)",
"ref_id": "BIBREF46"
},
{
"start": 272,
"end": 294,
"text": "(Krompa\u00df et al., 2013)",
"ref_id": "BIBREF20"
},
{
"start": 536,
"end": 556,
"text": "(Padia et al., 2019;",
"ref_id": "BIBREF30"
},
{
"start": 557,
"end": 580,
"text": "Balazevic et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 581,
"end": 604,
"text": "Minervini et al., 2017)",
"ref_id": "BIBREF26"
},
{
"start": 657,
"end": 676,
"text": "(Yang et al., 2014)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Neural Tensor Network (NTN) (Socher et al., 2013) is one such technique that combines the relation specific tensors with head and tail vector representation over non-linear activation function mapped to hidden layer representation. Translational methods like TransE use distanced based models to represent entities and the relationships in the same vector space R d . TransH (Wang et al., 2014) overcomes the shortcomings of TransE by modeling the vector representation with relations specific hyperplane. TransR (Lin et al., 2015) , TransD (Ji et al., 2015) model the representation similar to TransH by having relation specific spaces and decomposing the relation specific projection matrix as a product of two vector representations respectively.",
"cite_spans": [
{
"start": 28,
"end": 49,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF41"
},
{
"start": 375,
"end": 394,
"text": "(Wang et al., 2014)",
"ref_id": "BIBREF46"
},
{
"start": 513,
"end": 531,
"text": "(Lin et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 541,
"end": 558,
"text": "(Ji et al., 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Recognition of entity types into coarse grain types has been explored by researchers over the past two decades. Neural approaches have brought advances in extending the prediction problem from coarse grain entity types to fine-grain entity types. Work by Ling and Weld (2012) was one of the first attempts in predicting the fine-grain entity types. The work framed the problem as multi-class multilabel classification. This work also led to an important contribution of a labeled dataset FIGER, widely used as a benchmark dataset in measuring the performance of fine-grain entity type prediction architectures. Ren et al. (2016a) introduced the method of automatic fine-grain entity typing by using hierarchical partial label embedding. Shimaoka et al. (2016) introduced a neural fine-grain entity type prediction architecture that uses semantic context with self-attention and handcrafted features to capture semantic context needed for fine-grain type prediction. Xin et al. (2018b) showed that analyzing sentences with a pre-trained language model enhanced prediction performance. Zhang et al. (2018a) introduced a document level context and signifies the importance of mention level attention mechanism along with the sentence-level context in enhancing the performance of fine-grain entity prediction. Xu and Barbosa (2018) enhanced neural fine-grain entity typing by penalizing the cross-entropy loss with hierarchical context loss for the fine-grain type prediction.",
"cite_spans": [
{
"start": 255,
"end": 275,
"text": "Ling and Weld (2012)",
"ref_id": "BIBREF22"
},
{
"start": 611,
"end": 629,
"text": "Ren et al. (2016a)",
"ref_id": "BIBREF35"
},
{
"start": 737,
"end": 759,
"text": "Shimaoka et al. (2016)",
"ref_id": "BIBREF40"
},
{
"start": 966,
"end": 984,
"text": "Xin et al. (2018b)",
"ref_id": "BIBREF51"
},
{
"start": 1084,
"end": 1104,
"text": "Zhang et al. (2018a)",
"ref_id": "BIBREF56"
},
{
"start": 1307,
"end": 1328,
"text": "Xu and Barbosa (2018)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Language modeling has seen great progress in recent times. Bengio et al. (2000) pioneered the renewed use of distributed representation for dealing with the dimensionality curse imposed by the statistical methods. Their language model used recurrent neural networks for dealing with long sequences of text. Mikolov et al. (2010) extended the idea of building the recurrent neural networkbased language models with an improved feedback mechanism of backpropagation in time.",
"cite_spans": [
{
"start": 59,
"end": 79,
"text": "Bengio et al. (2000)",
"ref_id": "BIBREF5"
},
{
"start": 307,
"end": 328,
"text": "Mikolov et al. (2010)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "We are not the first to examine the intersection of knowledge graph embedding and language modeling. Ristoski and Paulheim (2016) ; Cochez et al. (2017) directly embed RDF graphs using languagemodeling based techniques. Ahn et al. 2017and Logan IV et al. (2019) have more recently leveraged information from a knowledge base to improve language modeling. However, in addition to knowledge graphs and language modeling, we additionally consider fine-grain entity typing.",
"cite_spans": [
{
"start": 101,
"end": 129,
"text": "Ristoski and Paulheim (2016)",
"ref_id": "BIBREF37"
},
{
"start": 132,
"end": 152,
"text": "Cochez et al. (2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "With the success of contextualized vector representations and the availability of large-scale, pretrained language models, there have been a number of efforts aimed at improving the knowledge implicitly contained in word and sentence representations. For example, Bosselut et al. (2019) introduce COMET, which describes a framework to learn and generate rich and diverse common-sense descriptions via language models (e.g., the autoregressive GPT-2). Similarly, Zhang et al. (2019) and provide insights into aspects of LM on downstream NLP tasks. While we share the overall goal of improving knowledge representation within language modeling, the short-term goals are dif- ferent, as we focus on individual facts, rather than traditional background/commonsense knowledge, and demonstrating the complementary nature of KGE, entity typing, and LM.",
"cite_spans": [
{
"start": 264,
"end": 286,
"text": "Bosselut et al. (2019)",
"ref_id": "BIBREF8"
},
{
"start": 462,
"end": 481,
"text": "Zhang et al. (2019)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "This section introduces the framework for jointly learning knowledge graph embedding (KGE), fine grain entity types (ET) and language models (LM). It uses a multi-task learning architecture built over baseline architectures for all three tasks. We begin by introducing LM-inspired knowledge graph embedding and fine grain entity typing architectures; we describe the joint learning architectures in \u00a75. Fundamentally, our approach relies on appropriate and select parameter sharing across the KGE, ET, and LM tasks in order to learn these models jointly. While joint learning or multi-task learning through shared parameters have been examined before for a number of tasks, we argue that this parameter sharing is a very effective way to improve KGE, ET, and/or LM (for a particular baseline). Its simplicity is a core benefit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "The architecture in Figure 2 embeds the factual entities and the relations. Let G be a knowledge graph (KG) with nodes V and edge E, where V is a set of entities e 1 , . . . , e |V | which are connected to each other by edges E. E is a set of K relations r 1 , . . . , r k . The architecture learns to embed the entities and relations into a (traditionally dense) vector space. Given the head entity e i , relation r k and tail entity e j , we predict whether a given triplet x i = (e i , r k , e j ) is true (in the KG).",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 28,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Knowledge Graph Embedding as a Language Model",
"sec_num": "3.1"
},
{
"text": "The model is a combination of a bi-LSTM (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997 ) and a feed-forward architecture. In the spirit of language modeling, we represent each triple x i input to the architecture as a sequence of n tokens (x i 1 , x i 2 , .., x in ). These tokens are represented in a continuous vector space by vector v it with dimension d,",
"cite_spans": [
{
"start": 40,
"end": 74,
"text": "(Hochreiter and Schmidhuber, 1997;",
"ref_id": "BIBREF14"
},
{
"start": 75,
"end": 101,
"text": "Schuster and Paliwal, 1997",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph Embedding as a Language Model",
"sec_num": "3.1"
},
{
"text": "where v i d \u2208 R d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph Embedding as a Language Model",
"sec_num": "3.1"
},
{
"text": "The bi-LSTM layer produces a learned representation of each token by maintaining two hidden states for each word: the forward state \u2212 \u2192 h it learns representation from left to right (Eq. (1)) and the backward state \u2190 \u2212 h it learns the representation from right to left (Eq. 2):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph Embedding as a Language Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2212 \u2192 h it = bi-LSTM(W\u2212 \u2192 h x it + V\u2212 \u2192 h \u2212 \u2192 h i t\u22121 + b\u2212 \u2192 h ) (1) \u2190 \u2212 h it = bi-LSTM(W\u2190 \u2212 h x it + V\u2190 \u2212 h \u2190 \u2212 h i t+1 + b\u2190 \u2212 h ) (2) h it = concat[ \u2212 \u2192 h it , \u2190 \u2212 h it ].",
"eq_num": "(3)"
}
],
"section": "Knowledge Graph Embedding as a Language Model",
"sec_num": "3.1"
},
{
"text": "The forward and the backward states of the bi-LSTM layer are concatenated to produce a sequentially encoded representation h i for each time step t given the input sequence x i . The bi-LSTM weight matrices W and V and b are learned during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph Embedding as a Language Model",
"sec_num": "3.1"
},
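To make the sequence-encoding step concrete, here is a minimal sketch (not the authors' code) of Eqs. (1)-(3): a KG triple is read as a token sequence, a forward and a backward recurrent pass each produce hidden states, and the per-step states are concatenated. A plain tanh recurrence stands in for the bi-LSTM cell, and all names (`rnn_step`, `bi_encode`) and toy dimensions are illustrative.

```python
import math

def rnn_step(W, V, b, x, h_prev):
    # h_t = tanh(W x_t + V h_{t-1} + b): a tanh cell standing in for an LSTM cell
    return [math.tanh(sum(w * xi for w, xi in zip(W[r], x)) +
                      sum(v * hp for v, hp in zip(V[r], h_prev)) + b[r])
            for r in range(len(b))]

def bi_encode(tokens, emb, params):
    Wf, Vf, bf, Wb, Vb, bb = params
    # Eq. (1): forward states, left to right
    h, fwd = [0.0] * len(bf), []
    for t in tokens:
        h = rnn_step(Wf, Vf, bf, emb[t], h)
        fwd.append(h)
    # Eq. (2): backward states, right to left
    h, bwd = [0.0] * len(bb), []
    for t in reversed(tokens):
        h = rnn_step(Wb, Vb, bb, emb[t], h)
        bwd.append(h)
    bwd.reverse()
    # Eq. (3): concatenate forward and backward states per time step
    return [f + b for f, b in zip(fwd, bwd)]

# A KG triple treated as a token sequence, in the spirit of language modeling.
emb = {"barack_obama": [1.0, 0.0], "profession": [0.0, 1.0], "politician": [0.5, 0.5]}
W = [[0.1, 0.2], [0.0, 0.3]]; V = [[0.1, 0.0], [0.0, 0.1]]; b = [0.0, 0.1]
states = bi_encode(["barack_obama", "profession", "politician"], emb, (W, V, b, W, V, b))
c_final = states[-1]  # the rightmost concatenated state summarizes the triple
```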
{
"text": "In principle the bi-LSTMs can be stacked, though we found not stacking to be empirically effective. Though the bi-LSTM produces a sequence of hidden states, we summarize the information captured by it in a single, \"final\" state C final . This state is then used to represent the information encoded by the whole sequence for the subsequent classification task. We let the rightmost state represent the \"final\" state, i.e., C final = h in . 4 The feed-forward architecture is a multi-layer perceptron with L = 3 rectified linear hidden layers (ReLU). The input to the feed-forward layer is a learned final cell state representation C final from the bi-LSTM sequence encoder. The feedforward process captures the information from the learned sequence encoder and outputs a transformed representation z l from the final output layer:",
"cite_spans": [
{
"start": 440,
"end": 441,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph Embedding as a Language Model",
"sec_num": "3.1"
},
{
"text": "z l = ReLU(W l z l\u22121 + b l )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph Embedding as a Language Model",
"sec_num": "3.1"
},
{
"text": ", with z 0 = C final , and layer-specific weights W l and biases b l .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph Embedding as a Language Model",
"sec_num": "3.1"
},
{
"text": "The output representation z L is then used to calculate the semantic matching score for the factual input x t . This score is calculated by incorporating the learned representation z L with the sequentially encoded final sequence step representation h t . The product is then passed through a sigmoid activation function, f (x t ; \u03b8) = \u03c3(z T l h t ), where \u03b8 is a collection of network parameters used for training the language model-inspired knowledge graph embedding architecture. These parameters are jointly learned by minimizing a weighted cross-entropy loss with 2 regularization (Eq. (4)):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph Embedding as a Language Model",
"sec_num": "3.1"
},
{
"text": "J(\u03b8) = \u2212 1 N N i=0 k \u2022 y i \u2022 log(f (x t ; \u03b8))+ (1 \u2212 y i ) \u2022 log(1 \u2212 f (x t ; \u03b8)) + \u03bb||\u03b8|| 2 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph Embedding as a Language Model",
"sec_num": "3.1"
},
{
"text": "where k is the weight assigned to the positive samples during the training, y i represents the original labels, and \u03bb is the regularization parameter. As a result of our KGE method, we do not produce or store single, canonical representations of entities and relations. We argue that the lack of a canonical entity embedding is a large benefit of our model. First, it is consistent with the push for contextualized embeddings. Second, we believe that, even in a KG, an entity's precise meaning or representation should depend on the fact/tuple that is being considered. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph Embedding as a Language Model",
"sec_num": "3.1"
},
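A small sketch, under a toy setup of our own (illustrative names and weights, not the paper's implementation), of the scoring function f(x; \u03b8) = \u03c3(z\u1d40h) and the positively-reweighted cross-entropy of Eq. (4): `k` up-weights true triples and `lam` is the regularization coefficient.

```python
import math

def score(z, h):
    # f(x; θ) = σ(zᵀ h): semantic matching score for one triple
    return 1.0 / (1.0 + math.exp(-sum(zi * hi for zi, hi in zip(z, h))))

def weighted_bce(examples, theta, k=2.0, lam=0.01):
    # Eq. (4): positive examples weighted by k, plus λ‖θ‖² regularization
    total = 0.0
    for z, h, y in examples:
        f = score(z, h)
        total += k * y * math.log(f) + (1 - y) * math.log(1 - f)
    return -total / len(examples) + lam * sum(t * t for t in theta)

examples = [([1.0, 2.0], [0.5, 0.5], 1),    # a true triple, scored high
            ([1.0, 2.0], [-0.5, -0.5], 0)]  # a corrupted triple, scored low
loss = weighted_bce(examples, theta=[1.0, 2.0, 0.5, 0.5])
```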
{
"text": "Recognizing the type of the given entity has been an integral part of tasks like knowledge base completion , question answering and co-reference resolution. Ling and Weld (2012) extended the problem of entity type prediction to fine-grain entity types. Given an input vector V x for entity x, type embedding matrix \u03b8, the function g predicts all the possible entity types t for given entity x as g(",
"cite_spans": [
{
"start": 157,
"end": 177,
"text": "Ling and Weld (2012)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural-Fine Grain Entity Type Prediction",
"sec_num": "3.2"
},
{
"text": "V x ; \u03b8) = \u03b8 T V x .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural-Fine Grain Entity Type Prediction",
"sec_num": "3.2"
},
{
"text": "The model learns the parameters \u03b8 by optimizing the hinge loss to classify a given entity into all the possible types T: Figure 3 : The joint learning architecture for training KGE and entity typing takes in both factual triplets and context information for an entity. Parameters of the architecture are trained to learn both the factual information as well the corresponding entity types.",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 129,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Neural-Fine Grain Entity Type Prediction",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J(\u03b8) = T t=0 max(0, 1 \u2212 y t \u2022 g(V x ; \u03b8)).",
"eq_num": "(5"
}
],
"section": "Neural-Fine Grain Entity Type Prediction",
"sec_num": "3.2"
},
{
"text": "An entity is predicted to be of type t if g(x; \u03b8) is greater than a given threshold value \u03c4 (typically, \u03c4 = 0.5, though it can be set empirically).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural-Fine Grain Entity Type Prediction",
"sec_num": "3.2"
},
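A toy sketch (illustrative numbers, not from the paper) of the type scorer g(V_x; \u03b8) = \u03b8\u1d40V_x, the threshold rule with \u03c4 = 0.5, and the hinge loss of Eq. (5):

```python
def type_scores(v_x, theta):
    # g(V_x; θ) = θᵀ V_x: one score per candidate type
    return [sum(w * v for w, v in zip(col, v_x)) for col in theta]

def predict_types(scores, types, tau=0.5):
    # an entity receives every type whose score exceeds the threshold τ
    return [t for t, s in zip(types, scores) if s > tau]

def hinge_loss(scores, y):
    # Eq. (5), with gold labels y_t ∈ {+1, -1}
    return sum(max(0.0, 1.0 - yt * s) for yt, s in zip(y, scores))

types = ["/person", "/person/politician", "/location"]
theta = [[0.9, 0.0], [0.2, 0.0], [-0.3, 0.0]]  # one row of θ per type
v_x = [1.0, 0.0]
scores = type_scores(v_x, theta)          # [0.9, 0.2, -0.3]
predicted = predict_types(scores, types)  # only "/person" clears τ = 0.5
loss = hinge_loss(scores, [1, 1, -1])     # 0.1 + 0.8 + 0.7
```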
{
"text": "The architecture in Figure 3 shows different sets of embedding-based features used to predict the entity type t. Word-level features and context level features-word spans to the left and right of the entity-are taken into consideration. The feature design used here is similar to the design of the features introduced by Shimaoka et al. (2016) . We note that our method neither assumes nor requires any type hierarchy, though including a type hierarchy is an avenue for future exploration.",
"cite_spans": [
{
"start": 321,
"end": 343,
"text": "Shimaoka et al. (2016)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 20,
"end": 28,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Neural-Fine Grain Entity Type Prediction",
"sec_num": "3.2"
},
{
"text": "We encode a mention representation m as the average of word embedding vectors u i for all words i present in the given entity e: m = 1 |n| n i=0 u i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Encoder",
"sec_num": null
},
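The averaging above can be sketched in a few lines; the embedding table and mention here are illustrative stand-ins.

```python
def mention_rep(mention_words, emb):
    # m = (1/n) Σ u_i: average the embeddings of the n words in the mention span
    vecs = [emb[w] for w in mention_words]
    n = len(vecs)
    return [sum(v[j] for v in vecs) / n for j in range(len(vecs[0]))]

emb = {"barack": [1.0, 0.0], "obama": [0.0, 1.0]}
m = mention_rep(["barack", "obama"], emb)  # [0.5, 0.5]
```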
{
"text": "The contextual representation for the given mention e is performed by dividing into left context l c and right context r c , where the left context is all the words present on the left of the given entity e, and the right context contains all the words present to the right of the given entity e. The left and right context are encoded by passing the context through a bi-LSTM sequence encoder (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997) . The sequence encoder is similar to the one used by Zhang et al. (2018a) . The outputs of the bi-LSTM sequence encoder are the sequential vector representation from both forward (left-to-right) and backward pass Attention We use an attention mechanism to reweight contextualized token embeddings. The attention layer, similar to that of Shimaoka et al. (2016) , is a 2 layer feed forward neural architecture where the attention weight for each time step of the context representation is learned given the parameter matrix W a and W s :",
"cite_spans": [
{
"start": 394,
"end": 428,
"text": "(Hochreiter and Schmidhuber, 1997;",
"ref_id": "BIBREF14"
},
{
"start": 429,
"end": 456,
"text": "Schuster and Paliwal, 1997)",
"ref_id": "BIBREF39"
},
{
"start": 510,
"end": 530,
"text": "Zhang et al. (2018a)",
"ref_id": "BIBREF56"
},
{
"start": 795,
"end": 817,
"text": "Shimaoka et al. (2016)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Encoder",
"sec_num": null
},
{
"text": "(right-to-left), (l f , l b ) = BiLSTM(l c , h, h t\u22121 ) and (r f , r b ) = BiLSTM(r c , h, h t\u22121 ), where (l f , l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Encoder",
"sec_num": null
},
{
"text": "a i = softmax(W s tanh(C i \u2022 W a )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Encoder",
"sec_num": null
},
{
"text": "The context representation is a weighted sum of attention and the context representation, C rep = t i=0 a i \u2022 C i . The attention mechanism used here differs from Shimaoka et al. (2016) such that in our work the contextual embeddings share the same attention parameters. The features extracted from the mention encoder m and attention weighted context encoder C r are concatenated to form a learned representation V = concat(m i , C rep ) that is passed to the feed-forward architecture for classification.",
"cite_spans": [
{
"start": 163,
"end": 185,
"text": "Shimaoka et al. (2016)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Encoder",
"sec_num": null
},
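The attention step can be sketched as follows: each contextual vector C_i is projected, scored against a shared parameter vector, and the softmax-normalized weights form the pooled context C_rep. All weights here are illustrative toy values, not learned parameters.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(C, W_a, w_s):
    # a_i = softmax(w_s · tanh(C_i W_a)); C_rep = Σ_i a_i · C_i
    logits = []
    for c in C:
        hidden = [math.tanh(sum(c[j] * W_a[j][k] for j in range(len(c))))
                  for k in range(len(W_a[0]))]
        logits.append(sum(ws * h for ws, h in zip(w_s, hidden)))
    a = softmax(logits)
    c_rep = [sum(a[i] * C[i][j] for i in range(len(C))) for j in range(len(C[0]))]
    return a, c_rep

C = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # one contextual vector per time step
W_a = [[0.3, -0.1], [0.2, 0.4]]           # shared attention projection
w_s = [1.0, 0.5]
a, c_rep = attend(C, W_a, w_s)
```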
{
"text": "The feed-forward architecture is a 3-layer neural architecture with a batch normalization layer (Ioffe and Szegedy, 2015) present between the first and the second layers with a ReLU activation (Nair and Hinton, 2010) . The input to the feed-forward layer is a concatenated representation from the context and mention encoders. The feed-forward process captures the information from the learned features and outputs a transformed representation",
"cite_spans": [
{
"start": 96,
"end": 121,
"text": "(Ioffe and Szegedy, 2015)",
"ref_id": "BIBREF16"
},
{
"start": 193,
"end": 216,
"text": "(Nair and Hinton, 2010)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Encoder",
"sec_num": null
},
{
"text": "q l = max(0, V l \u2022 q l\u22121 + d l )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Encoder",
"sec_num": null
},
{
"text": "from the final output layer to classify the given mention into the corresponding entity types, where V l , d l are the weights and bias for the hidden layer unit l respectively. We initialize q 0 = C r .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Encoder",
"sec_num": null
},
{
"text": "The language model predicts the next possible word based on the previous inputs, as p(w n |w 1 , w 2 , ...w n\u22121 ) = i P (w n |w n\u2212k , ....w n\u22121 ). We use a simple 94 Method WN11 FB13 Avg NTN (Socher et al., 2013) 86.2 90.0 88.1 TransE 75.9 81.5 78.7 TransH (Wang et al., 2014) 78.8 83.3 81.1 TransR (Lin et al., 2015) 85.9 82.5 84.2 TransD (Ji et al., 2015) 86.4 89.1 87.8 TEKE (Wang and Li, 2016) 86.1 84.2 85.2 TransG (Xiao et al., 2016) 87.4 87.3 87.4 TranSparse (Ji et al., 2016) 86.4 88.2 87.4 DistMult (Yang et al., 2014) 87.1 86.2 86.7 DistMult-HRS (Zhang et al., 2018b) 88.9",
"cite_spans": [
{
"start": 191,
"end": 212,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF41"
},
{
"start": 257,
"end": 276,
"text": "(Wang et al., 2014)",
"ref_id": "BIBREF46"
},
{
"start": 299,
"end": 317,
"text": "(Lin et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 340,
"end": 357,
"text": "(Ji et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 378,
"end": 397,
"text": "(Wang and Li, 2016)",
"ref_id": "BIBREF47"
},
{
"start": 420,
"end": 439,
"text": "(Xiao et al., 2016)",
"ref_id": "BIBREF49"
},
{
"start": 466,
"end": 483,
"text": "(Ji et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 508,
"end": 527,
"text": "(Yang et al., 2014)",
"ref_id": "BIBREF53"
},
{
"start": 556,
"end": 577,
"text": "(Zhang et al., 2018b)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model",
"sec_num": "3.3"
},
{
"text": "89.0 89.0 AATE 88.0 87.2 87.6 ConvKB (Nguyen et al., 2017) 87.6 88.8 88.2 DOLORES (Wang et al., 2018) 87 LSTM to learn the sequential structure of the text.",
"cite_spans": [
{
"start": 37,
"end": 58,
"text": "(Nguyen et al., 2017)",
"ref_id": "BIBREF28"
},
{
"start": 82,
"end": 101,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model",
"sec_num": "3.3"
},
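The factorization above is what perplexity (the LM metric used in Section 4) evaluates; a minimal sketch, where `probs` is assumed to hold each conditional P(w_i | history) that the model assigned to the observed words:

```python
import math

def sequence_log_prob(probs):
    """log p(w_1..w_n) = sum_i log P(w_i | w_{i-k}..w_{i-1})."""
    return sum(math.log(p) for p in probs)

def perplexity(probs):
    """Perplexity = exp(-(1/N) * sum_i log P(w_i | history)); lower is better."""
    return math.exp(-sequence_log_prob(probs) / len(probs))

# Sanity check: a uniform model over a 70k-word vocabulary (the vocabulary
# size used in the joint KGE+LM experiments) has perplexity 70000.
print(perplexity([1 / 70000] * 10))
```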
{
"text": "The input to the joint learning architectures are the pre-trained GloVe embedding vectors trained on 840 billion words (Pennington et al., 2014) . The parameters of the baseline and the joint learning architecture are learned with Stochastic Gradient Descent and Adam (Kingma and Ba, 2014) as a learning rate optimizer. The training of the joint learning networks is performed with alternating optimization. The loss functions of the respective tasks are optimized at each alternate epoch/ interval. The hyper-parameters for training these joint architecture are chosen manually for the bestperforming models on validation sets.",
"cite_spans": [
{
"start": 119,
"end": 144,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4"
},
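The alternating schedule described above can be sketched with a toy example: a shared parameter is updated by only one task's loss at each alternate epoch. The quadratic "losses", learning rate, and epoch count are hypothetical stand-ins for the real KGE and typing objectives.

```python
# Alternating optimization sketch: at even epochs only task A's loss
# updates the shared parameter; at odd epochs only task B's does.

def grad_task_a(w):  # d/dw of (w - 2)^2, a stand-in for one task's loss
    return 2 * (w - 2.0)

def grad_task_b(w):  # d/dw of (w - 4)^2, a stand-in for the other task's loss
    return 2 * (w - 4.0)

w, lr = 0.0, 0.1  # shared parameter and learning rate (illustrative values)
for epoch in range(200):
    g = grad_task_a(w) if epoch % 2 == 0 else grad_task_b(w)
    w -= lr * g

print(w)  # oscillates near 3, a compromise between the two objectives
```

With real networks the same schedule applies per epoch (or interval) over mini-batches, with each task's loss backpropagated into the shared layers in turn.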
{
"text": "Data For a direct comparison of the performance as possible, we use previously studied datasets. We evaluate KG triple classification using the standard datasets of WordNet 11 (WN11) and Freebase 13 (FB13). WN11 (Strapparava and Valitutti, 2004 ) is a publicly available lexical graph of synsets (synonyms). Freebase (Bollacker et al., 2008 ) is a collaborative ontology consisting of factual tuples of entities related to each other through semantic relation. While recent work has advocated for examining variants and other derivatives of these datasets such as FB15k-237 and WN18RR (Toutanova and Chen, 2015; Dettmers et al., 2018; Padia et al., 2019, i.a.) , there is a relative lack of previous experimental work on these newer datasets. Given space limitations, and in order to compare to the vast majority of previous work, we chose to report on the more common WN11 and FB13. We evaluate fine grain entity type prediction on the well-studied OntoNotes (Hovy et al., 2006) and FIGER (Ling and Weld, 2012) datasets. The OntoNotes dataset used here is a manually curated dataset by Gillick et al. (2014) Lastly, we evaluate the joint KGE and LM on WikiFact (Ahn et al., 2017), built using the facts from Freebase and Wikipedia descriptions. The content of the dataset is limited to Film/Actor/ from Freebase. Further the anchor fact defined in the text of the dataset are not used for training the joint model. The description of the entities in the original dataset contain both the summary and the body from Wikipedia. The current study is performed by using the description from the summary section defined in the dataset. The joint model is trained and evaluated with the split of 80/10/10 for train, validation and test sets, respectively.",
"cite_spans": [
{
"start": 212,
"end": 244,
"text": "(Strapparava and Valitutti, 2004",
"ref_id": "BIBREF42"
},
{
"start": 317,
"end": 340,
"text": "(Bollacker et al., 2008",
"ref_id": "BIBREF6"
},
{
"start": 585,
"end": 611,
"text": "(Toutanova and Chen, 2015;",
"ref_id": "BIBREF44"
},
{
"start": 612,
"end": 634,
"text": "Dettmers et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 635,
"end": 660,
"text": "Padia et al., 2019, i.a.)",
"ref_id": null
},
{
"start": 960,
"end": 979,
"text": "(Hovy et al., 2006)",
"ref_id": "BIBREF15"
},
{
"start": 990,
"end": 1011,
"text": "(Ling and Weld, 2012)",
"ref_id": "BIBREF22"
},
{
"start": 1087,
"end": 1108,
"text": "Gillick et al. (2014)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4"
},
{
"text": "Metrics KGE triple classification is evaluated through accuracy. The entity type model's performance is evaluated based on three common entity typing metrics-Strict F1, Loose Macro F1 and Loose Micro F1 (Ling and Weld, 2012)-while language modeling is measured by perplexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4"
},
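The three typing metrics can be sketched directly from their definitions in Ling and Weld (2012): strict scoring requires the predicted type set to exactly match the gold set, loose macro averages per-mention precision/recall, and loose micro pools type counts over all mentions. The example mentions below are illustrative.

```python
def strict_f1(gold, pred):
    """A mention counts only if the predicted type set exactly matches gold."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def f1(p, r):
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def loose_macro_f1(gold, pred):
    """Average per-mention precision and recall, then combine."""
    p = sum(len(g & q) / len(q) if q else 0.0 for g, q in zip(gold, pred)) / len(gold)
    r = sum(len(g & q) / len(g) if g else 0.0 for g, q in zip(gold, pred)) / len(gold)
    return f1(p, r)

def loose_micro_f1(gold, pred):
    """Pool type counts over all mentions before computing precision/recall."""
    inter = sum(len(g & q) for g, q in zip(gold, pred))
    p = inter / sum(len(q) for q in pred)
    r = inter / sum(len(g) for g in gold)
    return f1(p, r)

gold = [{"person", "artist"}, {"location"}]  # illustrative gold type sets
pred = [{"person"}, {"location"}]            # illustrative predictions
print(strict_f1(gold, pred), loose_macro_f1(gold, pred), loose_micro_f1(gold, pred))
```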
{
"text": "Previous Work as Baselines When possible, we directly compare our model's performance to that of previously published work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4"
},
{
"text": "Strict F1 Macro F1 Micro F1 AFET (Ren et al., 2016a) 20.32 54.51 52.61 KB only (Xin et al., 2018a) 35.12 70.49 63.36 HNM (Dong et al., 2015) 34.88 64.37 68.39 SA (Shimaoka et al., 2016) 42.77 72.40 74.91 MA (KNET) (Xin et al., 2018a) 41.58 72.66 75.72 KA (KNET) (Xin et al., 2018a) 45 Table 3 : We compare previous techniques on the WIKI-AUTO dataset for fine-grain typing. The proposed method outperforms all previous, comparable techniques. While techniques that utilize disambiguation to improve the results on the knowledge attention (e.g., KA + D (KNET) from Xin et al. (2018a)) can yield very modest improvements, e.g., to 77 micro F1, due to the extra information used, those results are not directly comparable to the proposed model.",
"cite_spans": [
{
"start": 33,
"end": 52,
"text": "(Ren et al., 2016a)",
"ref_id": "BIBREF35"
},
{
"start": 79,
"end": 98,
"text": "(Xin et al., 2018a)",
"ref_id": "BIBREF50"
},
{
"start": 121,
"end": 140,
"text": "(Dong et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 162,
"end": 185,
"text": "(Shimaoka et al., 2016)",
"ref_id": "BIBREF40"
},
{
"start": 214,
"end": 233,
"text": "(Xin et al., 2018a)",
"ref_id": "BIBREF50"
},
{
"start": 262,
"end": 281,
"text": "(Xin et al., 2018a)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [
{
"start": 285,
"end": 292,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": null
},
{
"text": "Strict F1 Macro F1 Micro F1 AFET (Ren et al., 2016a) 18.00 56.33 56.52 KB only (Xin et al., 2018a) 17.00 63.00 40.52 HNM (Dong et al., 2015) 15.00 64.75 65.30 SA (Shimaoka et al., 2016) 18.00 69.44 70.14 MA (KNET) (Xin et al., 2018a) 26.00 71.19 72.08 KA (KNET) (Xin et al., 2018a) 23.00 71.10 71.67 Joint Model-Proposed 25.00 73.40 74.43 Table 4 : We compare previous techniques on Wiki-MAN dataset for fine-grain entity type classification.",
"cite_spans": [
{
"start": 33,
"end": 52,
"text": "(Ren et al., 2016a)",
"ref_id": "BIBREF35"
},
{
"start": 79,
"end": 98,
"text": "(Xin et al., 2018a)",
"ref_id": "BIBREF50"
},
{
"start": 121,
"end": 140,
"text": "(Dong et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 162,
"end": 185,
"text": "(Shimaoka et al., 2016)",
"ref_id": "BIBREF40"
},
{
"start": 214,
"end": 233,
"text": "(Xin et al., 2018a)",
"ref_id": "BIBREF50"
},
{
"start": 262,
"end": 281,
"text": "(Xin et al., 2018a)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [
{
"start": 339,
"end": 346,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": null
},
{
"text": "This section presents the results of our basic KGE, entity typing models, and the joint learning architecture and their comparison to previous methods. The models were trained using either a 16GB V100 or 11GB 2080 TI GPU (single GPU training only).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "The proposed knowledge graph embedding architecture ( \u00a73.1) is trained for triple classification task: given an input triple x i , predict whether the fact it represents is true or not. Table 1 provides an overview of performance of our architecture in comparison to previously studies approaches, obtained from the corresponding paper.",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 193,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "The Effectiveness of a LM-inspired KGE",
"sec_num": "5.1"
},
{
"text": "Examining the results on WN11 and FB13, we see that in all but one case our approach improves upon the state of the art performance on triple classification task; in that one case (DistMult-HRS on WN11) our model was very competitive. These strong results support our hypothesis that language modeling principles can be an effective knowledge graph embedding technique. In examining perrelation performance on both WN11 and FB13, we observed an increase in the lower bound of accuracy results for relationships on both WordNet and Freebase, compared to Socher et al. (2013) . We see a rise in accuracy from Socher et al. (2013) 's 75.5% to 81% for the (domain region) relation from WordNet. On Freebase, we see performance for the institution relation goes from 77.2% to 80.9% with the current architecture.",
"cite_spans": [
{
"start": 553,
"end": 573,
"text": "Socher et al. (2013)",
"ref_id": "BIBREF41"
},
{
"start": 607,
"end": 627,
"text": "Socher et al. (2013)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Effectiveness of a LM-inspired KGE",
"sec_num": "5.1"
},
{
"text": "Recently, Yao et al. (2019) presented KG-BERT, which uses a pretrained BERT model to encode and classify triples. While this approach is empirically powerful, and surpasses our approach, we note that due to the limited training context of the current architecture, directly comparing those triple classification results with ours would be mischaracterizing the strengths and limitations of both approaches. Considering the training complexity and costs of transformer networks, our model presents an appealing balance between efficacy and efficiency.",
"cite_spans": [
{
"start": 10,
"end": 27,
"text": "Yao et al. (2019)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Effectiveness of a LM-inspired KGE",
"sec_num": "5.1"
},
{
"text": "Our novel neural fine grain entity type prediction techniques is compared with previous approaches in Table 2 . The neural architecture provides an improvement on FIGER in F1. To have a direct comparison, whe datasets used for the experiments are same as used by Shimaoka et al. (2016) and Zhang et al. (2018a) . Our method uses a margin based loss function to learn entity types, and outperforms all the previous methods (Abhishek et al., 2017; Ren et al., 2016a,b ) that learn fine grain entity type prediction through margin base loss functions and evaluated on the same datasets.",
"cite_spans": [
{
"start": 263,
"end": 285,
"text": "Shimaoka et al. (2016)",
"ref_id": "BIBREF40"
},
{
"start": 290,
"end": 310,
"text": "Zhang et al. (2018a)",
"ref_id": "BIBREF56"
},
{
"start": 422,
"end": 445,
"text": "(Abhishek et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 446,
"end": 465,
"text": "Ren et al., 2016a,b",
"ref_id": null
}
],
"ref_spans": [
{
"start": 102,
"end": 109,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "The Effectiveness of Entity Typing with KGE-Inspired Models",
"sec_num": "5.2"
},
{
"text": "Building on the baseline models, the joint model (Figure 3 ) addresses the implicit constraint given in the knowledge graph. The architecture learns to correlate the mention entities with the entities present in the context to addresses the problems of \"context-entity separation\" and \"text knowledge separation,\" as defined by Xin et al. (2018a) . The joint architecture is evaluated on the WikiAuto and WikiMan datasets. The model is trained with combination of FB15K dataset and WikiAuto to learn the both the factual information along with the entity typing structure. Tabs. 3 and 4 provide an overview of results from current method and it comparisons with the previous techniques. We trained and tested the joint model on a combination of datasets for KGE and FNER; see Table Figure 4 : The architecture for joint learning of knowledge graph embedding with language model. We use an LSTM for the LM component, and a bi-LSTM for the KGE component. The LM LSTM and the forward portion of the bi-LSTM are the same, allowing the transfer of knowledge. The architecture takes in as input the whole sentence and the triplet to learn the semantic structure and factual information from the knowledge base. of learning fine-grain entity types and knowledge graph embedding jointly with steady performances on either task with respect to their baselines.",
"cite_spans": [
{
"start": 328,
"end": 346,
"text": "Xin et al. (2018a)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [
{
"start": 49,
"end": 58,
"text": "(Figure 3",
"ref_id": null
},
{
"start": 776,
"end": 781,
"text": "Table",
"ref_id": null
},
{
"start": 782,
"end": 790,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Effectiveness of Joint KGE and Entity Typing",
"sec_num": "5.3"
},
{
"text": "We examine the complementary nature of LM and KGE on the WikFacts dataset introduced by Ahn et al. (2017), which contains both sentences and KGE-style tuples. Figure 4 shows the architecture for jointly learning to embed a KG and model language. We use a single-layer LSTM (unidirectional: left-to-right) for language modeling, though the core KGE architecture relies on an bi-LSTM. We unify these by ensuring that the LM LSTM and the left-to-right portion of the KGE bi-LSTM use the same weights. We compare this joint approach to the same models trained separately and inde- We summarize the results from the joint KGE+LM experiments, learned from WikiFacts with a 70k word vocabulary. In 6a we provide results for the architecture shown in Figure 4 (a bi-LSTM KGE, whose forward cells are the cells of a unidirectional LSTM LM). In 6b, we provide results where we replace the bi-LSTM KGE with LSTM LM.",
"cite_spans": [],
"ref_spans": [
{
"start": 159,
"end": 167,
"text": "Figure 4",
"ref_id": null
},
{
"start": 743,
"end": 751,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Effectiveness of Joint KGE and Language Modeling",
"sec_num": "5.4"
},
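The weight sharing described above can be sketched with a simplified tanh recurrent cell standing in for the LSTM (all names, dimensions, and values here are hypothetical): the LM and the forward direction of the KGE encoder hold the same cell object, so gradients from either task would update one shared parameter set.

```python
import numpy as np

class RNNCell:
    """Simplified recurrent cell standing in for the LSTM in this sketch."""
    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((dim, dim)) * 0.1  # recurrent weights
        self.U = rng.standard_normal((dim, dim)) * 0.1  # input weights
    def step(self, h, x):
        return np.tanh(self.W @ h + self.U @ x)

def run(cell, xs, dim):
    """Unroll the cell over a sequence, returning all hidden states."""
    h, hs = np.zeros(dim), []
    for x in xs:
        h = cell.step(h, x)
        hs.append(h)
    return hs

dim = 4
shared_fwd = RNNCell(dim)           # forward cell shared by the LM and the KGE encoder
backward = RNNCell(dim, seed=1)     # KGE-only backward cell

xs = [np.ones(dim), np.zeros(dim)]  # toy input sequence
lm_states = run(shared_fwd, xs, dim)  # LM: left-to-right only
kge_states = [np.concatenate([f, b]) for f, b in
              zip(run(shared_fwd, xs, dim), run(backward, xs[::-1], dim)[::-1])]

# The same forward parameters produced the LM states and the forward half of
# the bi-directional KGE states, so training either task updates them.
print(np.allclose(lm_states[0], kge_states[0][:dim]))
```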
{
"text": "pendently, without any weight sharing, evaluating the LMs on perplexity (lower is better) and KG prediction accuracy (higher is better). We use a vocabulary of the 70k most frequent words. As Table 6a shows, while there is a very slight decrease in KG prediction accuracy, the distinct improvement in the performance of language model over the baseline LM demonstrates that joint learning is particularly effective for language modeling. This suggests that even simple joint learning can be an effective way of using stated knowledge to improve language modeling.",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 200,
"text": "Table 6a",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "The Effectiveness of Joint KGE and Language Modeling",
"sec_num": "5.4"
},
{
"text": "While joint learning allowed the KG to help the LM, the reverse was not true. We speculate that this is in part because, from a language modeling perspective, the KGE model is able to consider both the forward and backward components. To test this, we replace the KGE bi-LSTM with the same unidirectional LSTM used by the LM. We show these results in Table 6b . Similar to the previous results, Sentences Input sentence stephen percy steve harris born 12 march 1956 is an english musician and songwriter known as the bassist occasional keyboardist backing vocalist primary songwriter and founder of the british heavy metal band iron maiden he is the only member of iron maiden to have remained in the band since their inception in 1975 and along with guitarist dave murray to have appeared on all of their albums Output (Joint model) joseph john james unk born 5 april 1949 is an english musician and actor known as the greatest and guitarist the vocalist guitarist songwriter and guitarist of the band heavy metal band the band he is the founding child of the team band have been by the band until its death in 2003 and toured with unk unk unk they have appeared in one of Output (baseline) peter baron dickie unk born 11 august 1943 is an english singer and best and as the most and and and and lead songwriter and member of the heavy rock rock band unk side he is the third singer of the band band have been with the band since its breakup in 1992 while cofounded with with dave tended has have collaborated on hundreds of their films Table 7 : We provide an example of the sentence predicted by the language model jointly learned with knowledge graph embedding and the independently trained language model. Notice how some implicit constraints, learned from the KGE, are transferred to the language model. KGE allowed LM perplexity to decrease significantly. However, we also see that the LM yielded a 3 point absolute improvement in KG prediction, supporting our hypothesis.",
"cite_spans": [],
"ref_spans": [
{
"start": 351,
"end": 359,
"text": "Table 6b",
"ref_id": "TABREF11"
},
{
"start": 1538,
"end": 1545,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Effectiveness of Joint KGE and Language Modeling",
"sec_num": "5.4"
},
{
"text": "To further demonstrate how our joint learning method improves the semantic understanding of the language, we qualitatively examine the generative capacity of these LMs in Table 7 . This provides an example of how joint training a KG and LM can improve output over a singly-trained LM on the same language data, and suggests that joint learning allows transfer of some implicit constraints in the language by learning the underlying relationships between the entities. While both are over-reliant on conjunctive structure, notice how the singly-trained baseline LM starts off alright, but then as the generation continues, loses coherence. Meanwhile, the jointly trained model maintains more coherence for longer. This suggests the KGE training is successfully transferring appropriate thematic/factive knowledge to the LM.",
"cite_spans": [],
"ref_spans": [
{
"start": 171,
"end": 178,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Effectiveness of Joint KGE and Language Modeling",
"sec_num": "5.4"
},
{
"text": "This work proposes a joint learning framework for learning real value representations of words, entities, and relations in a shared embedding space. Joint learning of factual representation with contextual understanding shows improvement in the learning of entity types. Learning the language model with knowledge graph embedding simultaneously enhances the performance on both modeling tasks. Our results suggest that language modeling could accelerate the study of schema-free approaches to both KGE and FNER, and strong performance can be obtained with comparatively simpler, resourcestarved language models. This has promising implications for low-resource, and few-shot, and/or domain-specific information extraction needs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our code is available at https://github.com/rajathpatel23/ joint-kge-fnet-lm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In early experiments we tried other approaches, such as averaging all hidden representation to compute the final state (Cfinal = 1 n t hi t . These caused neither large improvements nor decreases in performance. As a result, we advocate here for the simpler computation of Cfinal = hi n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
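The two pooling choices contrasted in the footnote are a one-line difference; the hidden-state values below are illustrative.

```python
import numpy as np

# Two ways to pool the encoder's hidden states h_1..h_n into C_final,
# as discussed in the footnote (toy values, n=3, dimension 2).
h = np.array([[0.1, 0.4], [0.3, 0.2], [0.5, 0.6]])

c_mean = h.mean(axis=0)  # C_final = (1/n) * sum_t h_t  (tried in early experiments)
c_last = h[-1]           # C_final = h_n                 (the simpler choice adopted)

print(c_mean, c_last)
```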
{
"text": "If a single representation is needed, note that because we tokenize entity, types, relations, and arguments into words, we could generate a single representation by combining the, e.g., entity's individual word embeddings according to the LM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank members and affiliates of the UMBC CSEE Department, including Ankur Padia, Tim Finin, and Karuna Joshi. Some experiments were conducted on the UMBC HPCF. We'd also like to thank the reviewers for their comments and suggestions.This material is also based on research that is in part supported ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fine-grained entity type classification by jointly learning representations and label embeddings",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Abhishek",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Anand",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Awekar",
"suffix": ""
}
],
"year": 2017,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek, Ashish Anand, and Amit Awekar. 2017. Fine-grained entity type classification by jointly learning representations and label embeddings. ArXiv, abs/1702.06709.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A neural knowledge language model",
"authors": [
{
"first": "Heeyoul",
"middle": [],
"last": "Sungjin Ahn",
"suffix": ""
},
{
"first": "Tanel",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "P\u00e4rnamaa",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sungjin Ahn, Heeyoul Choi, Tanel P\u00e4rnamaa, and Yoshua Bengio. 2017. A neural knowledge language model. ArXiv, abs/1608.00318.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Accurate text-enhanced knowledge graph representation learning",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "An",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xianpei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Le",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo An, Bo Chen, Xianpei Han, and Le Sun. 2018. Accurate text-enhanced knowledge graph represen- tation learning. In NAACL-HLT.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Dbpedia: A nucleus for a web of open data",
"authors": [
{
"first": "S\u00f6ren",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Kobilarov",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Cyganiak",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"G"
],
"last": "Ives",
"suffix": ""
}
],
"year": 2007,
"venue": "ISWC/ASWC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00f6ren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. Dbpedia: A nucleus for a web of open data. In ISWC/ASWC.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Tucker: Tensor factorization for knowledge graph completion",
"authors": [
{
"first": "Ivana",
"middle": [],
"last": "Balazevic",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Allen",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Hospedales",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "5188--5197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivana Balazevic, Carl Allen, and Timothy Hospedales. 2019. Tucker: Tensor factorization for knowledge graph completion. In EMNLP-IJCNLP, pages 5188- 5197.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Janvin",
"suffix": ""
}
],
"year": 2000,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2000. A neural probabilistic lan- guage model. J. Mach. Learn. Res., 3:1137-1155.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Freebase: a collaboratively created graph database for structuring human knowledge",
"authors": [
{
"first": "Kurt",
"middle": [
"D"
],
"last": "Bollacker",
"suffix": ""
},
{
"first": "C",
"middle": [
"J"
],
"last": "Evans",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Sturge",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2008,
"venue": "SIGMOD Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kurt D. Bollacker, C. J. Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collab- oratively created graph database for structuring hu- man knowledge. In SIGMOD Conference.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Translating embeddings for modeling multirelational data",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garc\u00eda-Dur\u00e1n",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garc\u00eda- Dur\u00e1n, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In NIPS.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "COMET: Commonsense transformers for automatic knowledge graph construction",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chai- tanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for au- tomatic knowledge graph construction. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Global rdf vector space embeddings",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Cochez",
"suffix": ""
},
{
"first": "Petar",
"middle": [],
"last": "Ristoski",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Heiko",
"middle": [],
"last": "Paulheim",
"suffix": ""
}
],
"year": 2017,
"venue": "International Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "190--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Cochez, Petar Ristoski, Simone Paolo Ponzetto, and Heiko Paulheim. 2017. Global rdf vector space embeddings. In International Seman- tic Web Conference, pages 190-207. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Convolutional 2d knowledge graph wmbeddings",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Dettmers",
"suffix": ""
},
{
"first": "Pasquale",
"middle": [],
"last": "Minervini",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph wmbeddings. In AAAI.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A hybrid neural model for type classification of entity mentions",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2015,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong, Furu Wei, Hong Sun, Ming Zhou, and Ke Xu. 2015. A hybrid neural model for type classification of entity mentions. In IJCAI.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Contextdependent fine-grained entity type tagging",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Nevena",
"middle": [],
"last": "Lazic",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Kirchner",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Huynh",
"suffix": ""
}
],
"year": 2014,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gillick, Nevena Lazic, Kuzman Ganchev, Jesse Kirchner, and David Huynh. 2014. Context- dependent fine-grained entity type tagging. ArXiv, abs/1412.1820.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9:1735- 1780.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Ontonotes: The 90% solution",
"authors": [
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Lance",
"middle": [
"A"
],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Ralph",
"middle": [
"M"
],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard H. Hovy, Mitchell P. Marcus, Martha Palmer, Lance A. Ramshaw, and Ralph M. Weischedel. 2006. Ontonotes: The 90% solution. In HLT-NAACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Ioffe",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
}
],
"year": 2015,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network train- ing by reducing internal covariate shift. ArXiv, abs/1502.03167.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Knowledge graph embedding via dynamic mapping matrix",
"authors": [
{
"first": "Guoliang",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Liheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge graph embedding via dy- namic mapping matrix. In ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Knowledge graph completion with adaptive sparse transfer matrix",
"authors": [
{
"first": "Guoliang",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2016. Knowledge graph completion with adaptive sparse transfer matrix. In AAAI.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Non-negative tensor factorization with rescal",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Krompa\u00df",
"suffix": ""
},
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
},
{
"first": "Xueyan",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Volker",
"middle": [],
"last": "Tresp",
"suffix": ""
}
],
"year": 2013,
"venue": "Tensor Methods for Machine Learning, ECML workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis Krompa\u00df, Maximilian Nickel, Xueyan Jiang, and Volker Tresp. 2013. Non-negative tensor factor- ization with rescal. In Tensor Methods for Machine Learning, ECML workshop.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning entity and relation embeddings for knowledge graph completion",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2015,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation em- beddings for knowledge graph completion. In AAAI.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Fine-grained entity recognition",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2012,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Ling and Daniel S. Weld. 2012. Fine-grained en- tity recognition. In AAAI.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Barack's wife hillary: Using knowledge-graphs for fact-aware language modeling",
"authors": [
{
"first": "Robert",
"middle": [
"L"
],
"last": "Logan",
"suffix": "IV"
},
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert L. Logan IV, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh. 2019. Barack's wife hillary: Using knowledge-graphs for fact-aware language modeling. In ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Label embedding for zero-shot fine-grained named entity typing",
"authors": [
{
"first": "Yukun",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Sa",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yukun Ma, Erik Cambria, and Sa Gao. 2016. Label embedding for zero-shot fine-grained named entity typing. In COLING.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Luk\u00e1s",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u010cernock\u00fd",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "INTER-SPEECH",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Luk\u00e1s Burget, Ja\u0148 Cernock\u00fd, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTER- SPEECH.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Regularizing knowledge graph embeddings via equivalence and inversion axioms",
"authors": [
{
"first": "Pasquale",
"middle": [],
"last": "Minervini",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Costabello",
"suffix": ""
},
{
"first": "Emir",
"middle": [],
"last": "Mu\u00f1oz",
"suffix": ""
},
{
"first": "V\u00edt",
"middle": [],
"last": "Nov\u00e1cek",
"suffix": ""
},
{
"first": "Pierre-Yves",
"middle": [],
"last": "Vandenbussche",
"suffix": ""
}
],
"year": 2017,
"venue": "ECML/PKDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pasquale Minervini, Luca Costabello, Emir Mu\u00f1oz, V\u00edt Nov\u00e1cek, and Pierre-Yves Vandenbussche. 2017. Regularizing knowledge graph embed- dings via equivalence and inversion axioms. In ECML/PKDD.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Rectified linear units improve restricted boltzmann machines",
"authors": [
{
"first": "Vinod",
"middle": [],
"last": "Nair",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2010,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In ICML.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A novel embedding model for knowledge base completion based on convolutional neural network",
"authors": [
{
"first": "Dai",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Tu",
"middle": [
"Dinh"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Dat",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Dinh",
"middle": [
"Q"
],
"last": "Phung",
"suffix": ""
}
],
"year": 2017,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Q. Phung. 2017. A novel embed- ding model for knowledge base completion based on convolutional neural network. In NAACL-HLT.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A three-way model for collective learning on multi-relational data",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
},
{
"first": "Volker",
"middle": [],
"last": "Tresp",
"suffix": ""
},
{
"first": "Hans-Peter",
"middle": [],
"last": "Kriegel",
"suffix": ""
}
],
"year": 2011,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In ICML.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Knowledge graph fact prediction via knowledge-enriched tensor factorization",
"authors": [
{
"first": "Ankur",
"middle": [],
"last": "Padia",
"suffix": ""
},
{
"first": "Konstantinos",
"middle": [],
"last": "Kalpakis",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Ferraro",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"W"
],
"last": "Finin",
"suffix": ""
}
],
"year": 2019,
"venue": "J. Web Semant",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankur Padia, Konstantinos Kalpakis, Francis Ferraro, and Timothy W. Finin. 2019. Knowledge graph fact prediction via knowledge-enriched tensor factoriza- tion. J. Web Semant., 59.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In EMNLP.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In NAACL.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Knowledge enhanced contextual word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Logan",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Vidur",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In EMNLP-IJCNLP.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Language models as knowledge bases",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Bakhtin",
"suffix": ""
},
{
"first": "Yuxiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? In EMNLP-IJCNLP.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Afet: Automatic finegrained entity typing by hierarchical partial-label embedding",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Wenqi",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Lifu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Ren, Wenqi He, Meng Qu, Lifu Huang, Heng Ji, and Jiawei Han. 2016a. Afet: Automatic fine- grained entity typing by hierarchical partial-label embedding. In EMNLP.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Label noise reduction in entity typing by heterogeneous partial-label embedding",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Wenqi",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Clare",
"middle": [
"R"
],
"last": "Voss",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2016,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Ren, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, and Jiawei Han. 2016b. Label noise reduction in entity typing by heterogeneous partial-label embed- ding. ArXiv, abs/1602.05307.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "RDF2Vec: RDF graph embeddings for data mining",
"authors": [
{
"first": "Petar",
"middle": [],
"last": "Ristoski",
"suffix": ""
},
{
"first": "Heiko",
"middle": [],
"last": "Paulheim",
"suffix": ""
}
],
"year": 2016,
"venue": "International Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "498--514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petar Ristoski and Heiko Paulheim. 2016. RDF2Vec: RDF graph embeddings for data mining. In Inter- national Semantic Web Conference, pages 498-514. Springer.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Adaptive statistical language modeling: A maximum entropyapproach",
"authors": [
{
"first": "Ronald",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald Rosenfeld. 1994. Adaptive statistical language modeling: A maximum entropyapproach. Ph.D. the- sis, Computer Science Department, Carnegie Mel- lon University.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Kuldip",
"middle": [
"K"
],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Trans. Signal Processing",
"volume": "45",
"issue": "",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Schuster and Kuldip K. Paliwal. 1997. Bidirec- tional recurrent neural networks. IEEE Trans. Sig- nal Processing, 45:2673-2681.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "An attentive neural architecture for fine-grained entity type classification",
"authors": [
{
"first": "Sonse",
"middle": [],
"last": "Shimaoka",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2016,
"venue": "AKBC@NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2016. An attentive neural archi- tecture for fine-grained entity type classification. In AKBC@NAACL-HLT.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Reasoning with neural tensor networks for knowledge base completion",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In NIPS.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Wordnet affect: an affective extension of wordnet",
"authors": [
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Valitutti",
"suffix": ""
}
],
"year": 2004,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlo Strapparava and Alessandro Valitutti. 2004. Wordnet affect: an affective extension of wordnet. In LREC.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Yago: a core of semantic knowledge",
"authors": [
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Gjergji",
"middle": [],
"last": "Kasneci",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In WWW '07.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Observed versus latent features for knowledge base and text inference",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality",
"volume": "",
"issue": "",
"pages": "57--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compo- sitionality, pages 57-66.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Dolores: Deep contextualized knowledge graph embeddings",
"authors": [
{
"first": "Haoyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoyu Wang, Vivek Kulkarni, and William Yang Wang. 2018. Dolores: Deep contextualized knowledge graph embeddings. ArXiv, abs/1811.00147.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Knowledge graph embedding by translating on hyperplanes",
"authors": [
{
"first": "Zhen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianwen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianlin",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Zhigang",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zhigang Chen. 2014. Knowledge graph embedding by trans- lating on hyperplanes. In AAAI.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Text-enhanced representation learning for knowledge graph",
"authors": [
{
"first": "Zhigang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Juan-Zi",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhigang Wang and Juan-Zi Li. 2016. Text-enhanced representation learning for knowledge graph. In IJ- CAI.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Connecting language and knowledge bases with embedding models for relation extraction",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Weston, Antoine Bordes, Oksana Yakhnenko, and Nicolas Usunier. 2013. Connecting language and knowledge bases with embedding models for re- lation extraction. In EMNLP.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Transg : A generative model for knowledge graph embedding",
"authors": [
{
"first": "Han",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016. Transg : A generative model for knowledge graph embedding. In ACL.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Improving neural fine-grained entity typing with knowledge attention",
"authors": [
{
"first": "Ji",
"middle": [],
"last": "Xin",
"suffix": ""
},
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji Xin, Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2018a. Improving neural fine-grained entity typing with knowledge attention. In AAAI.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Put it back: Entity typing with language model enhancement",
"authors": [
{
"first": "Ji",
"middle": [],
"last": "Xin",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji Xin, Hao Zhu, Xu Han, Zhiyuan Liu, and Maosong Sun. 2018b. Put it back: Entity typing with language model enhancement. In EMNLP.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Neural finegrained entity type classification with hierarchyaware loss",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Denilson",
"middle": [],
"last": "Barbosa",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Xu and Denilson Barbosa. 2018. Neural fine- grained entity type classification with hierarchy- aware loss. In NAACL-HLT.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Embedding entities and relations for learning and inference in knowledge bases",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bishan Yang, Wen tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. CoRR, abs/1412.6575.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Kgbert: Bert for knowledge graph completion",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Chengsheng",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Kg- bert: Bert for knowledge graph completion. ArXiv, abs/1909.03193.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Fine-grained entity typing through increased discourse context and adaptive classification thresholds",
"authors": [
{
"first": "Sheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "*SEM@NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sheng Zhang, Kevin Duh, and Benjamin Van Durme. 2018a. Fine-grained entity typing through increased discourse context and adaptive classification thresh- olds. In *SEM@NAACL-HLT.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Knowledge graph embedding with hierarchical relation structure",
"authors": [
{
"first": "Zhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Fuzhen",
"middle": [],
"last": "Zhuang",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Fen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhao Zhang, Fuzhen Zhuang, Meng Qu, Fen Lin, and Qing He. 2018b. Knowledge graph embedding with hierarchical relation structure. In EMNLP.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "ERNIE: Enhanced language representation with informative entities",
"authors": [
{
"first": "Zhengyan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: En- hanced language representation with informative en- tities. In ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Knowledge Graph Embedding as language modeling, where triples are \"tokenized\" into word embeddings and the computed, sequential output states are used to predict triple correctness.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "b ) are the sequential output for the left context from forward and backward passes, (r f , r b ) are the sequential outputs from the right context from forward and backward passes, h and h t\u22121 are the current and the previous hidden states for forward and backward passes respectively. Left outputs are concatenated to form a left-looking encoding L c = concat[l f , l b ], while right outputs are concatenated to form a right-looking encoding R c = concat[r f , r b ]. The complete contextual representation C of the context is the concatenation of the left context and right context representations, C = concat[L c , R c ].",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td/><td colspan=\"3\">Barack Obama gave a speech to congress in Washington DC</td></tr><tr><td/><td/><td/><td>sentence generation</td></tr><tr><td>Lawyer</td><td colspan=\"3\">/person/politician</td></tr><tr><td/><td/><td/><td>Joint-kg-fnet-lm</td></tr><tr><td>profession</td><td colspan=\"2\">is a type</td><td/></tr><tr><td>birthplace</td><td>Barack</td><td/><td>Barack Obama gave a speech to congress</td></tr><tr><td>Hawaii</td><td>Obama</td><td>location</td><td>United States</td></tr><tr><td colspan=\"2\">lives in</td><td/><td/></tr><tr><td colspan=\"2\">Washington</td><td/><td/></tr><tr><td/><td>DC</td><td/><td/></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": ", recent deep"
},
"TABREF1": {
"content": "<table><tr><td/><td/><td colspan=\"2\">Plausibility</td></tr><tr><td/><td/><td colspan=\"2\">Prediction</td></tr><tr><td colspan=\"2\">Type Prediction</td><td>Feed</td><td/></tr><tr><td/><td/><td>Forward</td><td/></tr><tr><td>Feed Forward</td><td/><td>Architecture</td><td>sigmoid</td></tr><tr><td>Archicture</td><td/><td/><td/></tr><tr><td/><td/><td>Output from</td><td/></tr><tr><td/><td colspan=\"2\">last time Step</td><td/></tr><tr><td/><td>Embed Features</td><td/><td/></tr><tr><td>Mention representation</td><td>Context Representatoin</td><td>Attention Layer</td><td/></tr><tr><td>Average</td><td/><td/><td/></tr><tr><td>Mention</td><td/><td/><td/></tr><tr><td>Encoder</td><td/><td/><td/></tr><tr><td>Pre-trained</td><td/><td/><td/></tr><tr><td>Word</td><td/><td/><td/></tr><tr><td>Embeddings</td><td/><td/><td/></tr><tr><td>truncated</td><td>Left Context</td><td colspan=\"2\">Mention</td><td>Right Context</td></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": ""
},
"TABREF3": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": "Comparison of previous approaches with the proposed method on the triple classification task."
},
"TABREF5": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": "The performance of the proposed fine-grain entity architecture compared to previous approaches on FIGER."
},
"TABREF6": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": ", consisting of 89 different entity types. FIGER consists of 113 entity types, occurring in sentences from 780k Wikipedia articles and 434 news reports. We evaluate the joint KGE and entity typing model on WikiAuto and WikiMan, both introduced by Xin et al. (2018a). WikiAuto is curated by distant supervision, with Freebase entities and types and sentence descriptions from Wikipedia articles. WikiMan is a manually curated dataset from Wikipedia articles with Freebase entities."
},
"TABREF9": {
"content": "<table><tr><td/><td colspan=\"2\">Plausibility</td></tr><tr><td/><td colspan=\"2\">Prediction</td></tr><tr><td/><td colspan=\"2\">sigmoid</td></tr><tr><td/><td/><td>Feed</td></tr><tr><td/><td/><td>Forward</td></tr><tr><td/><td/><td>Architecture</td></tr><tr><td>Softmax</td><td>Final backward</td><td/></tr><tr><td/><td>cell state</td><td/></tr><tr><td/><td colspan=\"2\">Concat</td></tr><tr><td>BiLSTM</td><td>Initial</td><td>Final</td></tr><tr><td>Block</td><td>State</td><td>forward</td></tr><tr><td/><td/><td>cell state</td></tr><tr><td>Initial</td><td/><td/></tr><tr><td>State</td><td/><td/></tr><tr><td colspan=\"3\">austen's british tv debut was on the irreverent cult itv puppet show spitting_image (1987-90)</td></tr><tr><td/><td>Input Sentence</td><td/></tr><tr><td colspan=\"2\">(Don Austen acted in spitting image)</td><td/></tr><tr><td/><td>Triplet Input</td><td/></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": "We show the changes in performance we observe when training joint fine-grain entity type prediction and triple classification models (bottom portion) vs. single-objective models (top portion). Joint training can lead to improvements on both KGE and FNER."
},
"TABREF11": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": ""
}
}
}
}