{
"paper_id": "K18-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:10:45.513107Z"
},
"title": "Latent Entities Extraction: How to Extract Entities that Do Not Appear in the Text?",
"authors": [
{
"first": "Eylon",
"middle": [],
"last": "Shoshan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Technion - Israel Institute of Technology",
"location": {
"settlement": "Haifa",
"country": "Israel"
}
},
"email": ""
},
{
"first": "Kira",
"middle": [],
"last": "Radinsky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Technion - Israel Institute of Technology",
"location": {
"settlement": "Haifa",
"country": "Israel"
}
},
"email": "kirar@cs.technion.ac.il"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Named-entity Recognition (NER) is an important task in the NLP field, and is widely used to solve many challenges. However, in many scenarios, not all of the entities are explicitly mentioned in the text. Sometimes they can be inferred from the context or from other indicative words. Consider the following sentence: \"CMA can easily hydrolyze into free acetic acid.\" Although water is not mentioned explicitly, one can infer that H2O is an entity involved in the process. In this work, we present the problem of Latent Entities Extraction (LEE). We present several methods for determining whether entities are discussed in a text, even though, potentially, they are not explicitly written. Specifically, we design a neural model that handles extraction of multiple entities jointly. We show that our model, along with a multi-task learning approach and a novel task-grouping algorithm, reaches high performance in identifying latent entities. Our experiments are conducted on a large biological dataset from the biochemical field. The dataset contains text descriptions of biological processes, and for each process, all of the entities involved in the process are labeled, including implicitly mentioned ones. We believe LEE is a task that will significantly improve many NER and subsequent applications and advance text understanding and inference.",
"pdf_parse": {
"paper_id": "K18-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "Named-entity Recognition (NER) is an important task in the NLP field, and is widely used to solve many challenges. However, in many scenarios, not all of the entities are explicitly mentioned in the text. Sometimes they can be inferred from the context or from other indicative words. Consider the following sentence: \"CMA can easily hydrolyze into free acetic acid.\" Although water is not mentioned explicitly, one can infer that H2O is an entity involved in the process. In this work, we present the problem of Latent Entities Extraction (LEE). We present several methods for determining whether entities are discussed in a text, even though, potentially, they are not explicitly written. Specifically, we design a neural model that handles extraction of multiple entities jointly. We show that our model, along with a multi-task learning approach and a novel task-grouping algorithm, reaches high performance in identifying latent entities. Our experiments are conducted on a large biological dataset from the biochemical field. The dataset contains text descriptions of biological processes, and for each process, all of the entities involved in the process are labeled, including implicitly mentioned ones. We believe LEE is a task that will significantly improve many NER and subsequent applications and advance text understanding and inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named entity recognition (NER) is an important building block in many natural-language-processing algorithms and applications. For example, representing texts as a knowledge graph, where nodes are extracted entities, has proven effective for question answering (Berant and Clark, 2014) as well as for summarization tasks (Ganesan et al., 2010). Other applications, such as semantic annotation (Marrero et al., 2013), require recognition of entities in the text as well. Babych and Hartley (2003) have also shown that identifying named entities correctly has an effect on the global syntactic and lexical structure, in addition to the local and immediate context. NER today focuses on extracting entities that exist in the text. However, many texts contain \"hidden\" entities, which are not mentioned explicitly in the text but might be inferred from the context. For example, special verbs can help a human reader infer the discussed entity implicitly. Consider the following textual passage of a biochemical reaction:",
"cite_spans": [
{
"start": 331,
"end": 353,
"text": "(Ganesan et al., 2010)",
"ref_id": "BIBREF15"
},
{
"start": 404,
"end": 426,
"text": "(Marrero et al., 2013)",
"ref_id": "BIBREF22"
},
{
"start": 480,
"end": 505,
"text": "Babych and Hartley (2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\"At the plasma membrane, phosphatidylcholine is hydrolyzed, removing one of its acyl groups, to 1-acyl lysophosphatidylcholine by membraneassociated phospholipase b1. \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The words water or H2O are not mentioned. Nonetheless, one can easily infer that water is involved in the process, since the word hydrolyzed refers to water. Therefore, water is a latent entity in this case. Other contexts do not involve only indicative verbs. Consider the following sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\"The conversion of Testosterone to Estradiol is catalyzed by Aromatase associated with the endoplasmic reticulum membrane.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here, Oxygen is a latent entity. Aromatase is an enzyme that belongs to the Monooxygenases family. This family is characterized by requiring Oxygen when catalyzing reactions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Latent entities not only play a prominent role in the biochemical and medical fields, but are also common in other domains. For example, consider the following snippet, as published in the business section of the New York Times magazine in January 2017:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\"The free app, which Facebook owns, is offering another vehicle to advertisers, who since late 2015 have been buying space on its original photo feed.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To an average human reader who is familiar with contemporary norms and trends, it is quite clear that the Instagram app is discussed in the textual passage above. However, it is not explicitly written, thus it is practically a latent entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Identifying latent entities in texts, and gaining the ability to infer them from a context, will significantly enrich our ability to comprehend and perform inference over complex texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we formulate the novel problem of Latent-Entities Extraction (LEE). We study several deep and non-deep models for this task that learn to extract latent entities from texts and overcome the fact that these are not mentioned explicitly. Specifically, we study a model that combines a recurrent neural network (Bi-GRU) with multi-task learning, showing that joint prediction of correlated entities can refine the performance. We present a novel algorithm for task grouping in the multi-task learning setting for LEE. The algorithm chooses which latent entities to learn together. We show this approach reaches the best performance for LEE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contribution of our work is threefold: (1) We formulate a novel task of LEE, where the goal is to extract entities which are implicitly mentioned in the text. (2) We present a large labeled biological dataset to study LEE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(3) We present several algorithms for this task. Specifically, we find that learning multiple latent entities in a multi-task learning setting, while selecting the correct entities to learn together, reaches the best results for LEE. We share our code and data to enable the community to develop additional algorithms for LEE: https://github.com/EylonSho/LatentEntitiesExtraction",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Entities Recognition Named-entity recognition (NER) aims at identifying different types of entities, such as people's names, companies, locations, organizations, etc., within a given text. Such deduced information is necessary for many applications, e.g., summarization tasks (Ganesan et al., 2010), data mining (Chen et al., 2004), and translation (Babych and Hartley, 2003).",
"cite_spans": [
{
"start": 271,
"end": 293,
"text": "(Ganesan et al., 2010)",
"ref_id": "BIBREF15"
},
{
"start": 308,
"end": 327,
"text": "(Chen et al., 2004)",
"ref_id": "BIBREF5"
},
{
"start": 346,
"end": 372,
"text": "(Babych and Hartley, 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This problem has been widely researched. Several benchmark data sets such as CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003) and OntoNotes 5.0 (Hovy et al., 2006; Pradhan et al., 2013) were published. Traditional approaches label each token in texts as part of named-entity, and achieve high performance (Ratinov and Roth, 2009; Passos et al., 2014; Chiu and Nichols, 2016) .",
"cite_spans": [
{
"start": 77,
"end": 107,
"text": "CoNLL-2003 (Tjong Kim Sang and",
"ref_id": null
},
{
"start": 108,
"end": 125,
"text": "De Meulder, 2003)",
"ref_id": "BIBREF30"
},
{
"start": 144,
"end": 163,
"text": "(Hovy et al., 2006;",
"ref_id": "BIBREF17"
},
{
"start": 164,
"end": 185,
"text": "Pradhan et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 305,
"end": 329,
"text": "(Ratinov and Roth, 2009;",
"ref_id": "BIBREF26"
},
{
"start": 330,
"end": 350,
"text": "Passos et al., 2014;",
"ref_id": "BIBREF23"
},
{
"start": 351,
"end": 374,
"text": "Chiu and Nichols, 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "However, these approaches rely on the assumption that entities are necessarily mentioned in the text. To the best of our knowledge, the problem of latent entities extraction, where entities may not be mentioned in the text at all, has yet to be researched.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Multi-Task Learning Multi-task learning (Caruana, 1998) has been used extensively across many NLP fields, including neural speech translation (Anastasopoulos and Chiang, 2018), neural machine translation (Domhan and Hieber, 2017), and summarization tasks (Isonuma et al., 2017). In this work we study several approaches for LEE, including multi-task learning. We observe that the vanilla approach of multi-task learning reaches limited results in our setting (Section 6). Previous work (Liu and Pan, 2017; Zhong et al., 2016; Jeong and Jun, 2018) has suggested that multi-task learning should be applied to related tasks. We present an extension to the multi-task learning setting by clustering related tasks to improve performance.",
"cite_spans": [
{
"start": 39,
"end": 53,
"text": "(Caruana, 1998",
"ref_id": "BIBREF4"
},
{
"start": 199,
"end": 224,
"text": "(Domhan and Hieber, 2017)",
"ref_id": "BIBREF12"
},
{
"start": 251,
"end": 273,
"text": "(Isonuma et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 487,
"end": 506,
"text": "(Liu and Pan, 2017;",
"ref_id": "BIBREF21"
},
{
"start": 507,
"end": 526,
"text": "Zhong et al., 2016;",
"ref_id": "BIBREF31"
},
{
"start": 527,
"end": 547,
"text": "Jeong and Jun, 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "It is quite common to have implicit entities in texts in the biomedical field. Reactome (Croft et al.) is a large publicly-available biological dataset of human biological pathways and reactions. The data consists of 9,333 biochemical reaction diagrams and their textual descriptions. Each reaction is labeled by experts with its reactants and products. We consider each reactant or product of a reaction as an entity. If an entity is not mentioned in the textual description, it is considered a latent entity. In more than 90% of the reactions, there are 3-5 involved entities. We performed an exploration to find latent frequency, i.e., how many times an entity was found to be latent among all of its occurrences in the dataset. We identify that 97.53% of the texts contain at least one latent entity and that 80.65% of the entities are latent at least 10% of the time. The analysis results for several entities are shown in Table 1. We observe an interesting phenomenon: several entities, such as ATP, mostly appear as a latent entity. 4 One-vs-all Algorithms for LEE. Given a single entity which frequently tends to be latent, we need to classify whether it is involved within a given text. We train a classifier per entity using multiple techniques. We then apply the classifiers to each text passage, which may discuss several latent entities, and output their predictions in a one-vs-all approach. We present several models which predict whether a given entity is implicitly (or explicitly) involved in a textual paragraph. We construct a classifier per entity which detects the entity in texts and overcomes the cases where it is latent. We devise a few simple yet relatively powerful algorithms, presented in Sections 4.1-4.5.",
"cite_spans": [],
"ref_spans": [
{
"start": 945,
"end": 953,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "The Reactome Dataset",
"sec_num": "3"
},
{
"text": "To tackle the LEE problem, we try to leverage the context to infer latent entities. We transform a text into a TF-IDF vector representation (applied on bigrams). Using these vectors, we train several supervised classification models. We did not observe a significant difference between the models, and present results for the Support Vector Machine (SVM) model (Cortes and Vapnik, 1995), which showed the highest performance on a validation set. The models are trained to predict whether a given entity is involved or not. As can be observed in Table 1, most of the entities are latent often enough, thus this dataset is appropriate for the LEE task.",
"cite_spans": [
{
"start": 354,
"end": 379,
"text": "(Cortes and Vapnik, 1995)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 540,
"end": 548,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Bag-of-words (TF-IDF)",
"sec_num": "4.1"
},
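The per-entity bag-of-words classifier described above can be sketched as follows, assuming scikit-learn; the texts, the binary labels, and the choice of H2O as the target entity are illustrative, not taken from the Reactome data.

```python
# Sketch of the one-vs-all TF-IDF + SVM setup: one binary classifier per
# latent entity, trained on bigram TF-IDF vectors of the text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = [
    "CMA can easily hydrolyze into free acetic acid.",
    "The conversion of Testosterone to Estradiol is catalyzed by Aromatase.",
    "Glucose is phosphorylated by hexokinase at the membrane.",
    "The kinase binds its substrate at the active site.",
]
# Illustrative labels: is H2O involved in the process (even if latent)?
labels_h2o = [1, 0, 0, 0]

vectorizer = TfidfVectorizer(ngram_range=(2, 2))  # bigrams, as in Section 4.1
X = vectorizer.fit_transform(texts)

clf = LinearSVC()
clf.fit(X, labels_h2o)
pred = clf.predict(vectorizer.transform(["CMA can easily hydrolyze the compound"]))
```

In the full one-vs-all scheme, one such classifier is trained per frequently-latent entity and all of them are applied to each text passage.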
{
"text": "One of the state-of-the-art approaches for modeling text was presented by Arora et al. (2017). We leverage pre-trained word embedding vectors to generate an embedding for a text which might contain implicit entities. Based on these embeddings, a supervised classifier per entity is trained as before, i.e., we create a classifier per entity to predict whether it is implicitly mentioned in the text.",
"cite_spans": [
{
"start": 75,
"end": 94,
"text": "Arora et al. (2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Document Embedding",
"sec_num": "4.2"
},
{
"text": "We study several additional methods of representing a document using several word embedding compositions (De Boom et al., 2016). We leverage pre-trained word embedding vectors that were trained on PubMed data, and suggest the following composition techniques: (1) we compute the element-wise maximum over the word vectors of the text, denoted v_{max}; (2) we compute the element-wise minimum, denoted v_{min}; (3) we compute the element-wise mean, denoted v_{avg}.",
"cite_spans": [
{
"start": 105,
"end": 127,
"text": "(De Boom et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Element-Wise Document Embedding",
"sec_num": "4.3"
},
{
"text": "We concatenate these three vectors into the final document representation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Element-Wise Document Embedding",
"sec_num": "4.3"
},
{
"text": "v = [v_{max}; v_{min}; v_{avg}].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Element-Wise Document Embedding",
"sec_num": "4.3"
},
{
"text": "This is the feature vector which is fed as an input to the SVM classifier, built for each entity separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Element-Wise Document Embedding",
"sec_num": "4.3"
},
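The element-wise composition of Section 4.3 can be sketched as follows; the toy 4-dimensional "pre-trained" vectors are illustrative stand-ins for the 200-dimensional PubMed-trained embeddings used in the paper.

```python
import numpy as np

# Element-wise document embedding: concatenate the per-dimension maximum,
# minimum and mean of the word vectors into a single feature vector.
word_vectors = {
    "phosphatidylcholine": np.array([0.1, -0.3, 0.5, 0.0]),
    "is":                  np.array([0.0,  0.1, 0.0, 0.2]),
    "hydrolyzed":          np.array([0.4,  0.2, -0.1, 0.3]),
}

def document_embedding(tokens, vectors):
    embs = np.stack([vectors[t] for t in tokens if t in vectors])
    v_max = embs.max(axis=0)   # element-wise maximum
    v_min = embs.min(axis=0)   # element-wise minimum
    v_avg = embs.mean(axis=0)  # element-wise mean
    return np.concatenate([v_max, v_min, v_avg])  # v = [v_max; v_min; v_avg]

v = document_embedding(["phosphatidylcholine", "is", "hydrolyzed"], word_vectors)
```

The resulting vector v is three times the embedding dimension and is fed to the per-entity SVM classifier.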
{
"text": "In this approach, we attempt to combine several ways of representing a document into a single representation. We concatenate the feature vectors for each document as generated in Sections 4.2 and 4.3. A classification model is then trained similarly to the previous sections and applied to the new representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined Document Embedding",
"sec_num": "4.4"
},
{
"text": "Instead of disregarding word order as in the previous approaches (Sections 4.2-4.4), we leverage pre-trained word embedding vectors that were trained on PubMed data, and learn a deep model to produce a document embedding. We experiment with several deep models, including Bi-LSTM and Bi-GRU units: each textual description is translated into a sequence of pre-trained embeddings. That sequence is fed into a Bi-Directional Long Short-Term Memory (Bi-LSTM) (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997) or Bi-GRU (Cho et al., 2014), and based on the final cell state, we perform a binary prediction of whether the given entity is implicitly mentioned or not.",
"cite_spans": [
{
"start": 467,
"end": 501,
"text": "(Hochreiter and Schmidhuber, 1997;",
"ref_id": "BIBREF16"
},
{
"start": 502,
"end": 529,
"text": "Schuster and Paliwal, 1997)",
"ref_id": "BIBREF28"
},
{
"start": 540,
"end": 558,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Document Embedding",
"sec_num": "4.5"
},
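The Bi-GRU variant of this deep document embedding can be sketched as below, assuming PyTorch; the paper specifies 200-dimensional embeddings and hidden states, while the batch and sequence sizes here are illustrative.

```python
import torch
import torch.nn as nn

# Sketch of the per-entity Bi-GRU classifier: a sequence of pre-trained word
# embeddings is encoded by a bidirectional GRU, and the final forward and
# backward states drive a binary prediction (entity involved or not).
class BiGRUEntityClassifier(nn.Module):
    def __init__(self, emb_dim=200, hidden_dim=200):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden_dim,
                          bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, 1)  # concat of both directions

    def forward(self, embeddings):           # (batch, seq_len, emb_dim)
        _, h_n = self.gru(embeddings)        # h_n: (2, batch, hidden_dim)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)
        return torch.sigmoid(self.out(h))    # probability entity is involved

model = BiGRUEntityClassifier()
probs = model(torch.randn(3, 12, 200))       # 3 texts, 12 tokens each
```

The Bi-LSTM variant only swaps `nn.GRU` for `nn.LSTM` and reads the final cell state instead.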
{
"text": "Given a predefined list of entities, we wish to classify whether one or more entities from that list are involved in a given text passage. We train a single multi-task-learning classifier that outputs the set of latent entities relevant to the text. Intuitively, the model might capture correlations between entities that tend to be involved (or not) together, and therefore their latent behavior might be similar. For each entity in the predefined list, the model outputs a probability as an estimate of its likelihood to be involved in a given text. Figure 1 illustrates the general design of our architecture: an embedding layer, Bi-GRU components fed by the embeddings, and ultimately a prediction layer containing as many outputs as the total number of latent entities to be extracted.",
"cite_spans": [],
"ref_spans": [
{
"start": 566,
"end": 574,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Multi-Task-Learning Algorithms for LEE",
"sec_num": "5"
},
{
"text": "Embedding The embedding layer first maps a sequence of words into a sequence of 200-dimensional embedding vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task-Learning Model Architecture",
"sec_num": "5.1"
},
{
"text": "Bidirectional GRU The output vectors from the last layer are fed into an RNN unit to capture context from the text. This unit is capable of analyzing context that spans sub-sequences of the text. The RNN component is sequentially fed the embedding vectors {v_t}, and iteratively computes a hidden state vector {h_t} based on the previous hidden state and the current input embedding vector, using some function f. Moreover, the output of this unit {o_t} at each timestep t is computed from the current hidden state using a function g. Specifically, we use a GRU unit as the RNN, as presented by Cho et al. (2014). The hidden state dimension is set to 200, with sigmoid as an activation function. Additionally, we use the bidirectional version (Schuster and Paliwal, 1997) of the GRU.",
"cite_spans": [
{
"start": 629,
"end": 646,
"text": "Cho et al. (2014)",
"ref_id": "BIBREF8"
},
{
"start": 777,
"end": 805,
"text": "(Schuster and Paliwal, 1997)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task-Learning Model Architecture",
"sec_num": "5.1"
},
{
"text": "We also apply natural dropout (Srivastava et al., 2014) of 0.5 on the input embedding vectors. Another refinement is dropout that is applied on the recurrent neural network hidden layers, as Gal and Ghahramani (2016) have suggested. This recurrent dropout is set to 0.25.",
"cite_spans": [
{
"start": 30,
"end": 55,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF29"
},
{
"start": 191,
"end": 216,
"text": "Gal and Ghahramani (2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task-Learning Model Architecture",
"sec_num": "5.1"
},
{
"text": "Classifier The outputs of the first and last cells of the Bi-GRU unit are used during the classification phase. The classifier unit is a fully connected layer with a sigmoid activation and k outputs, where k is the number of tasks, i.e., entities being predicted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task-Learning Model Architecture",
"sec_num": "5.1"
},
{
"text": "We define a loss function to address the multi-task learning approach. Here, we present a loss function for multi-task prediction that joins all of the entities together into a single prediction unit. Denote by m the number of training samples, and by k the number of latent entities to be extracted. We define the following loss function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss Function",
"sec_num": null
},
{
"text": "L(y, \\hat{y}) = -\\frac{1}{m} \\sum_{i=1}^{m} \\sum_{j=1}^{k} \\left[ y_j^{(i)} \\log \\hat{y}_j^{(i)} + \\left(1 - y_j^{(i)}\\right) \\log \\left(1 - \\hat{y}_j^{(i)}\\right) \\right]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss Function",
"sec_num": null
},
{
"text": "where y and \u0177 are the labeled and predicted values, respectively. In practice, we aggregate the log-losses over all of the training samples and latent entities, and then average to obtain the final loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss Function",
"sec_num": null
},
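The averaging described above is a standard multi-label binary cross-entropy; a minimal sketch, with illustrative labels and predictions (m = 2 samples, k = 3 entities):

```python
import numpy as np

# Multi-task log-loss: sum the binary cross-entropy over all k entities,
# then average over the m training samples. eps guards log(0).
def multi_task_log_loss(y, y_hat, eps=1e-12):
    y_hat = np.clip(y_hat, eps, 1 - eps)
    per_sample = (y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)).sum(axis=1)
    return -per_sample.mean()

y     = np.array([[1, 0, 1], [0, 1, 0]])          # gold entity involvement
y_hat = np.array([[0.9, 0.2, 0.8], [0.1, 0.7, 0.3]])  # predicted probabilities
loss = multi_task_log_loss(y, y_hat)
```

Perfect predictions drive the loss to (numerically) zero; confident wrong predictions are penalized heavily by the log terms.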
{
"text": "Note that this loss treats all of the entities as if they were related, since it is calculated over all of them without exception.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss Function",
"sec_num": null
},
{
"text": "Training Model optimization was carried out using standard backpropagation and the Adam optimizer (Kingma and Ba, 2014). We trained our model for 300 epochs with a batch size of 128. Backpropagation is allowed through all layers except the embedding layer, which is set using pretrained embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss Function",
"sec_num": null
},
{
"text": "We use pretrained word embeddings to represent each text passage. Note that fine-tuning, as well as learning embeddings from scratch, is not practical due to data scarcity; hence we directly use word2vec pre-trained vectors 1 . These were trained over large corpora, the PubMed archive of the biochemical, biological and medical fields. This choice fits the nature of our dataset, Reactome, which consists of biochemical reactions and biological processes. [Figure 1: The multi-task model architecture for latent entities extraction. Word embeddings are fed to several Bi-GRU units which are connected via a multi-task learning approach to numerous outputs, each representing a different latent entity prediction.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embedding Initialization",
"sec_num": null
},
{
"text": "The common approach in multi-task learning is to handle all tasks together (Evgeniou and Pontil, 2004; Rai and Daume III, 2010). One could therefore suggest that all of the entities be predicted together in a single multi-task classification process. However, this method is based on the assumption that all entities are necessarily related to one another (as in Section 5.1).",
"cite_spans": [
{
"start": 74,
"end": 101,
"text": "(Evgeniou and Pontil, 2004;",
"ref_id": "BIBREF13"
},
{
"start": 102,
"end": 126,
"text": "Rai and Daume III, 2010)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Grouping Model Architecture",
"sec_num": "5.2"
},
{
"text": "Several studies have shown that separating tasks into disjoint groups can boost classification performance. Intuitively, multi-task learning among tasks that are mutually related reduces noise in prediction (Liu and Pan, 2017; Zhong et al., 2016; Jeong and Jun, 2018). We present an algorithm that divides all of the tasks, i.e., all entity predictions, into task groups according to their inherent relatedness. These connections are captured using a co-occurrence matrix that we compute from training-set information. Conceptually, latent entities that are frequently labeled together in processes are considered related, and are thus grouped together in a joint multi-task classification unit.",
"cite_spans": [
{
"start": 212,
"end": 231,
"text": "(Liu and Pan, 2017;",
"ref_id": "BIBREF21"
},
{
"start": 232,
"end": 251,
"text": "Zhong et al., 2016;",
"ref_id": "BIBREF31"
},
{
"start": 252,
"end": 272,
"text": "Jeong and Jun, 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Grouping Model Architecture",
"sec_num": "5.2"
},
{
"text": "Our tasks are divided into groups based on a co-occurrence matrix M, which is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Grouping Model Architecture",
"sec_num": "5.2"
},
{
"text": "M_{ij} = \\frac{\\text{\\# mutual occurrences of } e_i, e_j}{\\text{\\# occurrences of } e_i}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Grouping Model Architecture",
"sec_num": "5.2"
},
{
"text": "where e_i is the i-th latent entity to be predicted. Additionally, note that the elements of M are normalized by entity frequency. Figure 2 presents an example of such a co-occurrence matrix for 5 sampled entities.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task-Grouping Model Architecture",
"sec_num": "5.2"
},
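The matrix above can be computed directly from the training labels; a minimal sketch in which the reaction label sets and entity names are illustrative:

```python
import numpy as np

# Co-occurrence matrix M: M[i][j] = (# reactions where e_i and e_j co-occur)
# divided by (# reactions containing e_i), i.e., rows normalized by the
# frequency of e_i. Note M is not symmetric.
reactions = [
    {"ATP", "ADP", "H2O"},
    {"ATP", "ADP"},
    {"ATP", "AMP"},
    {"H2O"},
]
entities = sorted({e for r in reactions for e in r})
idx = {e: i for i, e in enumerate(entities)}

counts = np.zeros(len(entities))
mutual = np.zeros((len(entities), len(entities)))
for r in reactions:
    for a in r:
        counts[idx[a]] += 1
        for b in r:
            if a != b:
                mutual[idx[a], idx[b]] += 1

M = mutual / counts[:, None]  # row-normalize by entity frequency
```

The asymmetry matters: in this toy data every ADP occurrence includes ATP (M[ADP][ATP] = 1), but only two of the three ATP occurrences include ADP (M[ATP][ADP] = 2/3), mirroring the non-reciprocal relations visible in Figure 2.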
{
"text": "After generating the co-occurrence matrix, we leverage it to select task groups. We denote by \u03b1 a minimum threshold for grouping a pair of tasks together (0 \u2264 \u03b1 \u2264 1). Two prediction tasks (a pair of entities) e_i and e_j are grouped together if M_ij > \u03b1 or M_ji > \u03b1. We would like to avoid multi-task groups that contain only one task. Therefore, if any singletons remain, we attach each one of them to its most related entity's group, according to the same co-occurrence distribution. This merging phase uses \u03b1/2 as the minimum threshold rather than \u03b1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Grouping Model Architecture",
"sec_num": "5.2"
},
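The grouping step can be sketched as a union-find merge; this is one plausible implementation under stated assumptions (the paper does not specify the exact merging order), and the small matrix M at the bottom is illustrative.

```python
# Task grouping: merge entities e_i, e_j into one group when
# M[i][j] > alpha or M[j][i] > alpha; then attach each remaining
# singleton to its most related group if relatedness exceeds alpha / 2.
def group_tasks(M, alpha=0.65):
    n = len(M)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for i in range(n):
        for j in range(i + 1, n):
            if M[i][j] > alpha or M[j][i] > alpha:
                union(i, j)

    # Reunion phase: singletons join their most related group at alpha / 2.
    for i in range(n):
        members = [j for j in range(n) if find(j) == find(i)]
        if len(members) == 1:
            best = max((j for j in range(n) if j != i),
                       key=lambda j: max(M[i][j], M[j][i]))
            if max(M[i][best], M[best][i]) > alpha / 2:
                union(i, best)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

M = [[0.0, 0.9, 0.1],
     [0.8, 0.0, 0.0],
     [0.4, 0.0, 0.0]]
groups = group_tasks(M)
```

Here entities 0 and 1 exceed α = 0.65 and are grouped directly, while entity 2 is a singleton whose best relatedness (0.4) clears the relaxed α/2 threshold and joins them.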
{
"text": "Clusters of tasks are computed with \u03b1 = 0.65. This value is chosen empirically such that the groups are divided fairly in terms of size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Grouping Model Architecture",
"sec_num": "5.2"
},
{
"text": "This process induces a division of the tasks into T disjoint groups, where each group consists of k_r prediction tasks (a task per latent entity), where r \u2208 {1, 2, . . . , T }. Note that each group is potentially of a different size, i.e., k_r is not [Figure 2: An example of a co-occurrence matrix describing the relatedness of entities to one another. Numbers in parentheses next to entity names indicate their frequency in the training set. As follows from the distribution, ATP and ADP are highly correlated. AMP also tends to co-occur with ATP (though not reciprocally). Similarly, ADOHCY and ADOMET are quite related to one another.]",
"cite_spans": [],
"ref_spans": [
{
"start": 260,
"end": 268,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task-Grouping Model Architecture",
"sec_num": "5.2"
},
{
"text": "fixed. Ultimately, these groups serve as the multi-task classification units in our model. Figure 3 illustrates the design of our architecture along with the task-grouping layer. It contains an embedding layer, Bi-GRU components fed by the embeddings, and ultimately T multi-task classification units, one per task group.",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 119,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task-Grouping Model Architecture",
"sec_num": "5.2"
},
{
"text": "Classifier Similarly to the classifier in Section 5.1, the first and last cells of the GRU are connected to several disjoint groups of prediction tasks. These outputs serve as the features for several multi-task classifier units, one unit per group of tasks. For the r-th (r \u2208 {1, 2, ..., T }) task group, we define a classifier unit as a fully connected layer with a sigmoid activation and k_r outputs, where k_r is the size of the r-th task group.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Grouping Model Architecture",
"sec_num": "5.2"
},
{
"text": "Loss Function As opposed to the loss function presented previously, here we would like to preserve relatedness among prediction tasks only when they are actually related. Therefore, we use the task grouping to recognize T disjoint sets of entities as prediction task groups. For each task group, we enforce the preservation of the entities' known correlations using a loss function designated for the classifier of that specific group. Denote by m the number of training samples, and by k_r the number of entities grouped in the r-th task group (r \u2208 {1, 2, . . . , T }). Latent entities from the same task group are made to retain their relatedness using the following loss function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Grouping Model Architecture",
"sec_num": "5.2"
},
{
"text": "L(y, \\hat{y}) = -\\frac{1}{m} \\sum_{i=1}^{m} \\sum_{j=1}^{k_r} \\left[ y_j^{(i)} \\log \\hat{y}_j^{(i)} + \\left(1 - y_j^{(i)}\\right) \\log \\left(1 - \\hat{y}_j^{(i)}\\right) \\right]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Grouping Model Architecture",
"sec_num": "5.2"
},
{
"text": "where y and \u0177 are the labeled and predicted values, respectively. While the concept is similar to the loss function of the vanilla multi-task approach, here each task-group classifier has a customized loss function that learns the behavior of the entities it is responsible for.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Grouping Model Architecture",
"sec_num": "5.2"
},
{
"text": "Note that the penalty for each predicted value is equal within the same task group, whereas between different task groups the loss value may differ. In this way, we refine the classification per task group, and thus per latent entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Grouping Model Architecture",
"sec_num": "5.2"
},
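{
"text": "The loss above can be sketched in code. The following NumPy function is an illustrative sketch under assumed shapes ((m, K) label and prediction matrices and disjoint per-group index arrays); it is not the authors' implementation:

```python
import numpy as np

def grouped_bce_loss(y, y_hat, groups):
    # y, y_hat: (m, K) arrays of binary labels and sigmoid outputs over K entities
    # groups: disjoint index arrays, one per task group
    eps = 1e-12  # avoid log(0)
    losses = []
    for idx in groups:
        yg = y[:, idx]
        pg = np.clip(y_hat[:, idx], eps, 1 - eps)
        # per-group loss: mean over samples of the summed binary cross-entropy
        ce = yg * np.log(pg) + (1 - yg) * np.log(1 - pg)
        losses.append(-np.mean(np.sum(ce, axis=1)))
    return losses  # one loss value per task-group classifier
```

It returns one loss value per task-group classifier, so each group can be optimized with its own customized loss as described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Grouping Model Architecture",
"sec_num": "5.2"
},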
{
"text": "In this section, we evaluate the algorithms for LEE. We first show the performance of the algorithms for single entity extraction, focusing on the ATP entity. We then present results for the general task of LEE, extracting multiple entities from a text. We conclude this section with a few qualitative examples illustrating the feature importance considered for the LEE task over several texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "We start by exploring the performance of the different classifiers on the task of identifying a single latent entity. As a representative test case, we consider the extraction of the ATP entity. This entity is of high importance to many biological processes. Additionally, it has the highest frequency in the dataset, i.e., there are many data examples (biochemical reactions) in which ATP is involved. In more than 81% of its occurrences in reactions, it is not explicitly mentioned in the text, which effectively makes it a pure latent entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single Latent Entity Extraction",
"sec_num": "6.1"
},
{
"text": "The results of all the algorithms for the prediction of the latent entity ATP are shown in Table 2 . We emphasize that here, training, validation, and testing were all performed on pure latent samples, Figure 3 : Multi-Task model architecture for latent entities extraction, based on the task grouping approach. Word embeddings are fed to several Bi-GRU units, which are connected via a multi-task learning approach to numerous groups of tasks, each representing a different group of related latent entities sharing a similar loss.",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 98,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 201,
"end": 209,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Single Latent Entity Extraction",
"sec_num": "6.1"
},
{
"text": "i.e., texts that explicitly mentioned the examined entity were filtered out. The last row stands for the multi-task approach with grouping, where ADP was selected to be in the same prediction task group as ATP (ADP is known to be highly correlated with ATP, as can also be deduced from Figure 2 ). The improved empirical results in that experiment suggest that using multi-task learning for related tasks can be beneficial for performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 282,
"end": 290,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Single Latent Entity Extraction",
"sec_num": "6.1"
},
{
"text": "In this section, we consider the full problem of LEE of extracting multiple entities jointly. The results are presented in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Multiple Latent Entities Extraction",
"sec_num": "6.2"
},
{
"text": "We measure performance using two metrics: micro-average and macro-average. Whereas the micro method measures the quality of all predictions taken together, the macro method reflects the performance of the predictions per task. Note that the average is calculated over the number of latent entities to extract.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Latent Entities Extraction",
"sec_num": "6.2"
},
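{
"text": "The distinction between the two averaging schemes can be sketched with F1 as the underlying score; the choice of F1 here is for concreteness only, and any per-entity score (e.g., AUC) is averaged analogously:

```python
import numpy as np

def micro_macro_f1(y_true, y_pred):
    # y_true, y_pred: (m, K) binary matrices over K latent entities
    tp = (y_true * y_pred).sum(axis=0).astype(float)
    fp = ((1 - y_true) * y_pred).sum(axis=0).astype(float)
    fn = (y_true * (1 - y_pred)).sum(axis=0).astype(float)
    # micro: pool counts over all entities, then compute a single score
    micro = 2 * tp.sum() / (2 * tp.sum() + fp.sum() + fn.sum())
    # macro: compute a score per entity (task), then average over entities
    per_task = 2 * tp / np.maximum(2 * tp + fp + fn, 1)
    macro = per_task.mean()
    return micro, macro
```

Micro-averaging pools counts over all entities, so frequent entities dominate; macro-averaging weights every entity equally, which is why it better reflects per-task fairness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Latent Entities Extraction",
"sec_num": "6.2"
},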
{
"text": "Among the one-vs-all methods (Section 4), the most successful method in terms of the macro metric is the bag-of-words & SVM model (Section 4.1). At first sight, it may be surprising that such a simple approach outperforms more sophisticated methods, particularly the deep-learning techniques. We speculate that this is an outcome of the dataset imbalance: different entities occur with very different frequencies in the data. For example, there are many training examples for ATP and ADP (both are involved in more than 14% of the reactions), while other entities are significantly less frequent (e.g., Oxygen, NADPH, NADP+, and others occur in less than 3% of the reactions). Consequently, many entity classes have very few training examples. This prevents the deep-learning models from training well, and therefore the macro score of the SVM methods tends to be higher. The reason the SVM with BOW performs better than SVMs over the more semantic embeddings (Sections 4.2-4.4) might also be the small number of training examples, which limits the contribution of semantics to the LEE task in this dataset. The vanilla multi-task approach described in Section 5.1 performs well according to the micro-averaging metric, but fails in terms of the macro measurement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Latent Entities Extraction",
"sec_num": "6.2"
},
{
"text": "Ultimately, our proposed multi-task GRU-based model with task-grouping (Section 5.2) outperforms all other baselines in both the micro and macro metrics. Thus, it not only extracts entities with high overall performance, but also preserves fairness among the different prediction tasks. We conclude that selecting which tasks to learn together in a multi-task approach is critical for the LEE task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Latent Entities Extraction",
"sec_num": "6.2"
},
{
"text": "Further, we present Area Under the Curve (AUC) scores per entity, for the most frequent entities in the dataset, in Table 3 . The results are shown for the two best performing classifiers (bag-of-words embedding with SVM classi- Table 3 : AUC scores for the bag-of-words vectors & SVM baseline compared to the multi-task learner with task-grouping. The results are shown for the most frequent entities in the dataset. Statistically significant results are shown in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 132,
"text": "Table 3",
"ref_id": null
},
{
"start": 238,
"end": 245,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multiple Latent Entities Extraction",
"sec_num": "6.2"
},
{
"text": "fication and multi-task with grouping).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Latent Entities Extraction",
"sec_num": "6.2"
},
{
"text": "It should be noted that the multi-task learning approach is much more effective in multiple latent entity extraction (Table 4) than in the single latent entity extraction case (Table 2) . Specifically, multi-task learning along with task-grouping performs much better than the other baselines. Naturally, the gains are significant in terms of the macro metric, as our loss function (as defined in Section 5.2) is aimed at macro optimization. However, we notice that the method also improves performance in terms of the micro metric. To motivate this, consider an example of a sentence with water as a latent entity. Let us assume water does not appear many times in the corpus, but appears many times in the data together with Oxygen. As water does not appear in many sentences, it would be hard to learn indicative features in a sentence to predict it. However, in many cases it is possible to infer Oxygen. The prior of having an",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 126,
"text": "(Table 4)",
"ref_id": "TABREF4"
},
{
"start": 180,
"end": 189,
"text": "(Table 2)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Multi-Task Approach Contribution in Multiple Latent Entity Extraction",
"sec_num": "6.3"
},
{
"text": "Oxygen as a latent entity in the sentence can be considered an indicative feature that also helps to predict water as a latent entity. As those entities do not appear many times in the corpus, learning the indicative features for a multi-task learner is hard. However, when grouping only relevant entities, we overcome this issue and the scores improve. Table 2 provides results on the extraction of the ATP entity only, which is the most common latent entity in the Reactome dataset. Since there are many training examples for this entity in the corpus (it is the most frequent latent entity), it is possible to learn indicative features even in non-multi-task models, which therefore perform well. Thus, there is a small difference between the multi-task and non-multi-task approaches in Table 2 . On the other hand, in Table 4 we examine the performance over the top-40 most frequent entities, including very frequent entities (such as ATP and ADP) as well as less frequent ones (such as Oxygen, NADPH, NADP+, and water). As a result, the scores over all entities, both frequent and infrequent, are much better in the multi-task learning setting, specifically with task-grouping.",
"cite_spans": [],
"ref_spans": [
{
"start": 361,
"end": 368,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 779,
"end": 786,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 811,
"end": 818,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Multi-Task Approach Contribution in Multiple Latent Entity Extraction",
"sec_num": "6.3"
},
{
"text": "To help understand the LEE problem, we present several examples of prominent words that contribute to the prediction of a latent entity. We leverage the LIME algorithm (Ribeiro et al., 2016) to explain the multi-task learning algorithm and present feature importance for ATP and NADPH in Figure 4 .",
"cite_spans": [
{
"start": 164,
"end": 186,
"text": "(Ribeiro et al., 2016)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 284,
"end": 293,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Examples",
"sec_num": "6.4"
},
{
"text": "The model inferred that words such as phosphorylation or phosphorylates are good indicators for the existence of ATP. Phosphorylation is the process through which a phosphate group, which is usually provided by ATP, is transferred from one molecule to a protein.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Examples",
"sec_num": "6.4"
},
{
"text": "To infer NADPH, the algorithm gives high importance to the words P450 and reductase. Cytochrome P450s are proteins that use a variety of molecules as substrates in enzymatic reactions. They usually serve as oxidase enzymes in electron transfer chains. One of the common systems they are involved in is the microsomal P450 system, where electrons are transferred from NADPH via cytochrome P450 reductase. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Examples",
"sec_num": "6.4"
},
{
"text": "In this paper, we presented the new task of latent entities extraction (LEE) from text, which offers a new perspective on the original named-entity recognition task. Specifically, we focus on how to extract an entity when it is not explicitly mentioned in the text, but rather implied by the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "We developed several methods to detect the existence of such entities in texts, presented a large labeled dataset for exploring the LEE task, and performed an extensive evaluation of our methods. We explore one-vs-all methods with several ways to embed the text, as well as a multi-task learning approach that attempts to predict several entities at once. We observe that learning highly related entities together during LEE prediction substantially boosts detection performance. We also present several explanations of the classifications made behind the scenes by the best-performing classifier for LEE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "For future work, we consider learning LEE in an end-to-end fashion, learning which tasks to group together to improve LEE. We believe the LEE task will spur additional research in the field to improve NER when entities are implicitly mentioned and to help better comprehend complex texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The trained embeddings are available online at: https://github.com/cambridgeltl/BioNLP-2016",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Tied multitask learning for neural speech translation",
"authors": [
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonios Anastasopoulos and David Chiang. 2018. Tied multitask learning for neural speech translation. CoRR, abs/1802.06655.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A simple but tough-to-beat baseline for sentence embeddings",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Improving machine translation quality with automatic named entity recognition",
"authors": [
{
"first": "Bogdan",
"middle": [],
"last": "Babych",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Hartley",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 7th International EAMT workshop on MT and other Language Technology Tools, Improving MT through other Language Technology Tools: Resources and Tools for Building MT",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bogdan Babych and Anthony Hartley. 2003. Im- proving machine translation quality with automatic named entity recognition. In Proceedings of the 7th International EAMT workshop on MT and other Language Technology Tools, Improving MT through other Language Technology Tools: Resources and Tools for Building MT, pages 1-8. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Modeling Biological Processes for Reading Comprehension",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1499--1510",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant and Peter Clark. 2014. Modeling Bio- logical Processes for Reading Comprehension. Pro- ceedings of the 2014 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP), pages 1499-1510.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multitask learning",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 1998,
"venue": "Learning to learn",
"volume": "",
"issue": "",
"pages": "95--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich Caruana. 1998. Multitask learning. In Learning to learn, pages 95-133. Springer.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Crime data mining: a general framework and some examples",
"authors": [
{
"first": "Hsinchun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wingyan",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [
"Jie"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Chau",
"suffix": ""
}
],
"year": 2004,
"venue": "computer",
"volume": "37",
"issue": "4",
"pages": "50--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hsinchun Chen, Wingyan Chung, Jennifer Jie Xu, Gang Wang, Yi Qin, and Michael Chau. 2004. Crime data mining: a general framework and some examples. computer, 37(4):50-56.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "How to Train Good Word Embeddings for Biomedical NLP",
"authors": [
{
"first": "Billy",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Gamal",
"middle": [],
"last": "Crichton",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "166--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Billy Chiu, Gamal Crichton, Anna Korhonen, and Sampo Pyysalo. 2016. How to Train Good Word Embeddings for Biomedical NLP. pages 166-174.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Named entity recognition with bidirectional lstm-cnns",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nichols",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "357--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transac- tions of the Association for Computational Linguis- tics, 4:357-370.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "\u00c7a\u011flar",
"middle": [],
"last": "G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, \u00c7a\u011flar G\u00fcl\u00e7ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Supportvector networks",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Machine learning",
"volume": "20",
"issue": "3",
"pages": "273--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support- vector networks. Machine learning, 20(3):273-297.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Representation learning for very short texts using weighted word embedding aggregation",
"authors": [
{
"first": "Cedric",
"middle": [
"De"
],
"last": "Boom",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Van Canneyt",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Demeester",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Dhoedt",
"suffix": ""
}
],
"year": 2016,
"venue": "Pattern Recognition Letters",
"volume": "80",
"issue": "",
"pages": "150--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cedric De Boom, Steven Van Canneyt, Thomas De- meester, and Bart Dhoedt. 2016. Representation learning for very short texts using weighted word embedding aggregation. Pattern Recognition Let- ters, 80(C):150-156.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Using targetside monolingual data for neural machine translation through multi-task learning",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Domhan",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hieber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1500--1505",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tobias Domhan and Felix Hieber. 2017. Using target- side monolingual data for neural machine translation through multi-task learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1500-1505.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Regularized multi-task learning",
"authors": [
{
"first": "Theodoros",
"middle": [],
"last": "Evgeniou",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Pontil",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "109--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theodoros Evgeniou and Massimiliano Pontil. 2004. Regularized multi-task learning. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 109- 117. ACM.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A theoretically grounded application of dropout in recurrent neural networks",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "1019--1027",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. A theoret- ically grounded application of dropout in recurrent neural networks. In Advances in neural information processing systems, pages 1019-1027.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Opinosis: A graph-based approach to abstractive summarization of highly redundant opinions",
"authors": [
{
"first": "Kavita",
"middle": [],
"last": "Ganesan",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "340--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A graph-based approach to ab- stractive summarization of highly redundant opin- ions. pages 340-348.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Ontonotes: the 90% solution",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the human language technology conference of the NAACL",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: the 90% solution. In Proceedings of the human lan- guage technology conference of the NAACL, Com- panion Volume: Short Papers, pages 57-60. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Extractive summarization using multi-task learning with document classification",
"authors": [
{
"first": "Masaru",
"middle": [],
"last": "Isonuma",
"suffix": ""
},
{
"first": "Toru",
"middle": [],
"last": "Fujino",
"suffix": ""
},
{
"first": "Junichiro",
"middle": [],
"last": "Mori",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Matsuo",
"suffix": ""
},
{
"first": "Ichiro",
"middle": [],
"last": "Sakata",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2101--2110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masaru Isonuma, Toru Fujino, Junichiro Mori, Yutaka Matsuo, and Ichiro Sakata. 2017. Extractive sum- marization using multi-task learning with document classification. In Proceedings of the 2017 Confer- ence on Empirical Methods in Natural Language Processing, pages 2101-2110.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Variable selection and task grouping for multi-task learning",
"authors": [
{
"first": "Jun-",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Jeong",
"suffix": ""
},
{
"first": "Chi-Hyuck",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.04676"
]
},
"num": null,
"urls": [],
"raw_text": "Jun-Yong Jeong and Chi-Hyuck Jun. 2018. Variable selection and task grouping for multi-task learning. arXiv preprint arXiv:1802.04676.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Adaptive group sparse multi-task learning via trace lasso",
"authors": [
{
"first": "Sulin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sinno Jialin Pan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2358--2364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sulin Liu and Sinno Jialin Pan. 2017. Adaptive group sparse multi-task learning via trace lasso. In Pro- ceedings of the 26th International Joint Conference on Artificial Intelligence, pages 2358-2364. AAAI Press.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Named entity recognition: fallacies, challenges and opportunities",
"authors": [
{
"first": "M\u00f3nica",
"middle": [],
"last": "Marrero",
"suffix": ""
},
{
"first": "Juli\u00e1n",
"middle": [],
"last": "Urbano",
"suffix": ""
},
{
"first": "Sonia",
"middle": [],
"last": "S\u00e1nchez-Cuadrado",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Morato",
"suffix": ""
},
{
"first": "Juan",
"middle": [
"Miguel"
],
"last": "G\u00f3mez-Berb\u00eds",
"suffix": ""
}
],
"year": 2013,
"venue": "Computer Standards & Interfaces",
"volume": "35",
"issue": "5",
"pages": "482--489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M\u00f3nica Marrero, Juli\u00e1n Urbano, Sonia S\u00e1nchez- Cuadrado, Jorge Morato, and Juan Miguel G\u00f3mez- Berb\u00eds. 2013. Named entity recognition: fallacies, challenges and opportunities. Computer Standards & Interfaces, 35(5):482-489.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Lexicon infused phrase embeddings for named entity resolution",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "Vineet",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"D"
],
"last": "Mc-Callum",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre Passos, Vineet Kumar, and Andrew D Mc- Callum. 2014. Lexicon infused phrase embeddings for named entity resolution. In CoNLL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Towards robust linguistic analysis using ontonotes",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Sameer Pradhan",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhong",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "143--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj\u00f6rkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards ro- bust linguistic analysis using ontonotes. In Proceed- ings of the Seventeenth Conference on Computa- tional Natural Language Learning, pages 143-152.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Infinite predictor subspace models for multitask learning",
"authors": [
{
"first": "Piyush",
"middle": [],
"last": "Rai",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daume",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "613--620",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piyush Rai and Hal Daume III. 2010. Infinite predictor subspace models for multitask learning. In Proceed- ings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 613-620.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Design challenges and misconceptions in named entity recognition",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "147--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Com- putational Natural Language Learning, pages 147- 155. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Why should I trust you?: Explaining the predictions of any classifier",
"authors": [
{
"first": "Marco Tulio",
"middle": [],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135-1144. ACM.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Kuldip",
"middle": [
"K"
],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on Signal Processing",
"volume": "45",
"issue": "11",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "The Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik F Tjong Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003",
"volume": "4",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003-Volume 4, pages 142-147. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Flexible multi-task learning with latent task grouping",
"authors": [
{
"first": "Shi",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Pu",
"suffix": ""
},
{
"first": "Yu-Gang",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xiangyang",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2016,
"venue": "Neurocomputing",
"volume": "189",
"issue": "",
"pages": "179--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shi Zhong, Jian Pu, Yu-Gang Jiang, Rui Feng, and Xiangyang Xue. 2016. Flexible multi-task learning with latent task grouping. Neurocomputing, 189:179-188.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Figure 1: The multi-task model architecture for latent entities extraction. Word embeddings are fed to several Bi-GRU units which are connected via a multi-task learning approach to numerous outputs, each representing a different latent entity prediction"
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "-Grouping -Embedding based Bi-GRU (Section 5.2) 0.822 0.849 0.835 0.809 0.839 0.811"
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Figure 4: An example of prominent words when inferring latent entities. (a) ATP Extraction Top Features. (b) NADPH Extraction Top Features."
},
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Latent frequency of top common entities in the descriptions, i.e., most of the time they are not mentioned explicitly in the text.",
"num": null
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Entity</td><td colspan=\"2\">Bag-of-Words AUC Grouped-MTL AUC</td></tr><tr><td>ATP</td><td>0.906</td><td>0.938</td></tr><tr><td>ADP</td><td>0.910</td><td>0.965</td></tr><tr><td>H2O</td><td>0.864</td><td>0.928</td></tr><tr><td>PI</td><td>0.872</td><td>0.937</td></tr><tr><td>H+</td><td>0.924</td><td>0.889</td></tr><tr><td>O2</td><td>0.904</td><td>0.928</td></tr><tr><td>NADPH</td><td>0.917</td><td>0.998</td></tr><tr><td>NADP+</td><td>0.918</td><td>0.972</td></tr><tr><td>COA-SH</td><td>0.960</td><td>0.998</td></tr></table>",
"text": "Extraction of ATP as a latent entity. Statistically significant results are shown in bold.",
"num": null
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Results of multiple latent entities extraction of top 40 frequent entities. Left side is micro metric based, while the right side is according to macro metric. Statistically significant results are shown in bold.",
"num": null
}
}
}
}