{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:40:16.255615Z"
},
"title": "Improving Medical NLI Using Context-Aware Domain Knowledge",
"authors": [
{
"first": "Shaika",
"middle": [],
"last": "Chowdhury",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Chicago",
"location": {}
},
"email": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Luo",
"suffix": "",
"affiliation": {},
"email": "yuan.luo@northwestern.edu"
},
{
"first": "Philip",
"middle": [
"S"
],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Chicago",
"location": {}
},
"email": "psyu@uic.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Domain knowledge is important to understand both the lexical and relational associations of words in natural language text, especially for domain-specific tasks like Natural Language Inference (NLI) in the medical domain, where due to the lack of a large annotated dataset such knowledge cannot be implicitly learned during training. However, because of the linguistic idiosyncrasies of clinical texts (e.g., shorthand jargon), solely relying on domain knowledge from an external knowledge base (e.g., UMLS) can lead to wrong inference predictions as it disregards contextual information and, hence, does not return the most relevant mapping. To remedy this, we devise a knowledge adaptive approach for medical NLI that encodes the premise/hypothesis texts by leveraging supplementary external knowledge, alongside the UMLS, based on the word contexts. By incorporating refined domain knowledge at both the lexical and relational levels through a multi-source attention mechanism, it is able to align the token-level interactions between the premise and hypothesis more effectively. Comprehensive experiments and case study on the recently released MedNLI dataset are conducted to validate the effectiveness of the proposed approach. Premise:-DMII complicated by DM neuropathy-PVD s/p L CFA w/balloon angioplasty of SFA and AK [**Doctor Last Name **] artery w/ persistent non-healing ulcer at the lateral and medial malleolus, non-healing L pedalulcer-Hypertension-h/o MDR Pseudomonas and MRSA skin infections-h/o hemorrhagic pancreatitis ([**2857**])-h/o cholecystitis (still has gallbladder) Hypothesis: Patient has multiple diabetes related comorbidities.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Domain knowledge is important to understand both the lexical and relational associations of words in natural language text, especially for domain-specific tasks like Natural Language Inference (NLI) in the medical domain, where due to the lack of a large annotated dataset such knowledge cannot be implicitly learned during training. However, because of the linguistic idiosyncrasies of clinical texts (e.g., shorthand jargon), solely relying on domain knowledge from an external knowledge base (e.g., UMLS) can lead to wrong inference predictions as it disregards contextual information and, hence, does not return the most relevant mapping. To remedy this, we devise a knowledge adaptive approach for medical NLI that encodes the premise/hypothesis texts by leveraging supplementary external knowledge, alongside the UMLS, based on the word contexts. By incorporating refined domain knowledge at both the lexical and relational levels through a multi-source attention mechanism, it is able to align the token-level interactions between the premise and hypothesis more effectively. Comprehensive experiments and case study on the recently released MedNLI dataset are conducted to validate the effectiveness of the proposed approach. Premise:-DMII complicated by DM neuropathy-PVD s/p L CFA w/balloon angioplasty of SFA and AK [**Doctor Last Name **] artery w/ persistent non-healing ulcer at the lateral and medial malleolus, non-healing L pedalulcer-Hypertension-h/o MDR Pseudomonas and MRSA skin infections-h/o hemorrhagic pancreatitis ([**2857**])-h/o cholecystitis (still has gallbladder) Hypothesis: Patient has multiple diabetes related comorbidities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural Language Inference is a fundamentally important but challenging task in Natural Language Processing (NLP) as it requires understanding and reasoning over natural language texts (MacCartney and Manning, 2009) . As a result, a good performing This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http: //creativecommons.org/licenses/by/4.0/.",
"cite_spans": [
{
"start": 185,
"end": 215,
"text": "(MacCartney and Manning, 2009)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "NLI system is considered indispensable for downstream NLP applications such as question answering and automatic text summarization (Harabagiu and Hickl, 2006; Lloret et al., 2008) . Given a pair of sentences, a premise and a hypothesis \u210e, the goal of NLI is to determine whether the semantic relationship between and \u210e is among , and . The ability to understand natural language text innately requires to deal with background knowledge 1 (Long et al., 2017; Weissenborn et al., 2017) . A robust NLI model usually needs to reason over two types of background knowledge -lexical and relational (Weissenborn et al., 2017) . The former pertains to understanding the concepts expressed by the words in the text, while the latter learns the semantic relations between the different concepts. When performing NLI on open domain data, it is assumed that the background knowledge will be implicitly learned from the training corpora. Re- Figure 1 : Sample premise-hypothesis pair from MedNLI. The words in red, \"DMII\" and \"DM\" in and \"diabetes\" in \u210e are semantically similar at the lexical level. The UMLS relation \"co-occurs\" of the highlighted words in green in to \"diabetes\" in \u210e manifests the inferential signal \"comorbidities\". lease of large NLI datasets like the Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and Multi-Genre Natural Language Inference (MultiNLI) (Williams et al., 2017) corpora, with around 570,000 and 433,000 sentence pairs respectively, have made it possible to train deep neural networks, which are capable of encoding this knowledge in their parameters.",
"cite_spans": [
{
"start": 131,
"end": 158,
"text": "(Harabagiu and Hickl, 2006;",
"ref_id": "BIBREF8"
},
{
"start": 159,
"end": 179,
"text": "Lloret et al., 2008)",
"ref_id": "BIBREF17"
},
{
"start": 438,
"end": 457,
"text": "(Long et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 458,
"end": 483,
"text": "Weissenborn et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 592,
"end": 618,
"text": "(Weissenborn et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 1304,
"end": 1325,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 1380,
"end": 1403,
"text": "(Williams et al., 2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 929,
"end": 937,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For specialized domains (e.g., medical), however, large NLI datasets are extremely scarce and the required implicit knowledge beyond the text surface cannot be extracted from limited data. For example, the recently released MedNLI dataset (Romanov and Shivade, 2018) , albeit being the single publicly available NLI dataset in the clinical domain, contains only around 13,000 expert annotated sentence pairs 2 . Therefore, most current literature on medical NLI (Romanov and Shivade, 2018; Jin et al., 2019) have capitalized on the prior semantic knowledge that is encoded in external resources (e.g., UMLS). Nevertheless, a limitation of the existing suite of medical ontologies such as UMLS is that they retrieve mapping (i.e., lexical/relational) irrespective of the textual context, which could mislead the inference model. This is worsened by the distinct linguistic idiosyncrasies present in clinical texts, wherein phrases are compressed with shorthand jargon (i.e., abbreviations) for physicians' convenience. Specifically, this instigates three challenges: 1) some semantically important words do not map to any matching concept in the UMLS 3 , 2) a wrong concept mapping is returned that does not reflect the word's actual meaning and 3) the noise introduced by wrong concept mapping could get carried forward when retrieving the relational mapping.",
"cite_spans": [
{
"start": 239,
"end": 266,
"text": "(Romanov and Shivade, 2018)",
"ref_id": "BIBREF25"
},
{
"start": 462,
"end": 489,
"text": "(Romanov and Shivade, 2018;",
"ref_id": "BIBREF25"
},
{
"start": 490,
"end": 507,
"text": "Jin et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address the aforementioned issues, we devise an approach for the NLI problem in the medical domain that is equipped with an adaptive encoding scheme to integrate context-relevant domain knowledge into the text representation more effectively. In particular, a mixture model is employed to adaptively leverage supplementary external resources, that provide contextual evidence to disambiguate the concept sense for each word, and thus facilitate in learning more semantically refined text embeddings, as well as, account for the missing words. Furthermore, in order to infuse important inferential clues between the 2 compared to open domain NLI datasets 3 we call such words \"missing words\" in this paper premise-hypothesis tokens, context-relevant relational embeddings elicited from knowledge graph is encoded through multi-source attention mechanism. We dub the proposed framework as Multi-Source Knowledge Adaptive Inference Network (MUSKAN).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Natural Language Inference lies at the core of many NLP problems (Harabagiu and Hickl, 2006; Rush et al., 2015; Pasunuru and Bansal, 2017) . Recently, deep learning has achieved great success in NLI. Current neural models for NLI can be categorized into two main groups of frameworks (1) sentence encoding models and (2) sentence pair interaction models, as discussed below:",
"cite_spans": [
{
"start": 65,
"end": 92,
"text": "(Harabagiu and Hickl, 2006;",
"ref_id": "BIBREF8"
},
{
"start": 93,
"end": 111,
"text": "Rush et al., 2015;",
"ref_id": "BIBREF26"
},
{
"start": 112,
"end": 138,
"text": "Pasunuru and Bansal, 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Inference",
"sec_num": "2.1"
},
{
"text": "In the sentence encoding framework, the sentence pair is modeled by encoding each sentence separately and the semantic relationship computed based on their similarity. InferSent (Conneau et al., 2017) first encodes the sentences using a recurrent model and then performs element-wise product and absolute difference to capture the relations between the sentences. A stacked BiLSTM is used in the Gated BiLSTM model proposed by (Chen et al., 2017b) , which first applies intra-sentence gated attention 4 to bring the sentences to fixed length vectors, and then relation information similar to InferSent is computed.",
"cite_spans": [
{
"start": 178,
"end": 200,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 427,
"end": 447,
"text": "(Chen et al., 2017b)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Inference",
"sec_num": "2.1"
},
{
"text": "Whereas in the case of sentence pair interaction framework, word-level interactions are captured using some sort of alignment mechanism (e.g., attention), which are then aggregated to a fixed-length vector to make the final decision. ESIM (Chen et al., 2016) first uses BiLSTM to capture sequential context and then models local inference between word pairs using attention; it then enhances them by computing relation information similar to In-ferSent/Gated BiLSTM but at the word-level, which is then aggregated to fixed length vectors using a second BiLSTM. In addition, it also incorporates syntactic parsing information with a second similar network. Match-LSTM (Wang and Jiang, 2015) first uses LSTMs to encode the sentences, then computes word-by-word matching using an attention scoring function for each time step, where the last hidden state is used to represent the sentence representation.",
"cite_spans": [
{
"start": 239,
"end": 258,
"text": "(Chen et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 667,
"end": 689,
"text": "(Wang and Jiang, 2015)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Inference",
"sec_num": "2.1"
},
{
"text": "Utilizing external knowledge has shown improvement in performance for some NLI works (Chen et al., 2017a; Wang et al., 2019; Li et al., 2019) . Knowledge from WordNet (Miller, 1995) is leveraged in work by (Chen et al., 2017a) to enhance the different components of the NLI model. (Kang et al., 2018) uses the hypernym/hyponym information from three different external linguistic resources, namely WordNet, PPDB (Ganitkevitch et al., 2013) and SICK (Marelli et al., 2014) , to generate adversarial examples which are used to augment and train the text entailment system in order to make it robust. (Wang et al., 2019) uses WordNet, ConceptNet and DBPedia (Auer et al., 2007) to incorporate knowledge graphs into textbased NLI models.",
"cite_spans": [
{
"start": 85,
"end": 105,
"text": "(Chen et al., 2017a;",
"ref_id": "BIBREF3"
},
{
"start": 106,
"end": 124,
"text": "Wang et al., 2019;",
"ref_id": "BIBREF29"
},
{
"start": 125,
"end": 141,
"text": "Li et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 167,
"end": 181,
"text": "(Miller, 1995)",
"ref_id": "BIBREF21"
},
{
"start": 206,
"end": 226,
"text": "(Chen et al., 2017a)",
"ref_id": "BIBREF3"
},
{
"start": 281,
"end": 300,
"text": "(Kang et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 412,
"end": 439,
"text": "(Ganitkevitch et al., 2013)",
"ref_id": "BIBREF6"
},
{
"start": 449,
"end": 471,
"text": "(Marelli et al., 2014)",
"ref_id": "BIBREF20"
},
{
"start": 598,
"end": 617,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 655,
"end": 674,
"text": "(Auer et al., 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLI and External Knowledge",
"sec_num": "2.2"
},
{
"text": "In medical NLI, external knowledge is provided as domain knowledge that exists in the form of medical ontology or knowledge base. Work by (Jin et al., 2019) incorporates relational information from UMLS into pre-trained BioELMO 5 and BioBERT (Lee et al., 2020) embeddings. (Romanov and Shivade, 2018) similarly uses domain-specific knowledge from UMLS, however, they modify the pretrained embeddings using retrofitting. They also experiment with knowledge-directed attention in ESIM and InferSent models.",
"cite_spans": [
{
"start": 138,
"end": 156,
"text": "(Jin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 242,
"end": 260,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLI and External Knowledge",
"sec_num": "2.2"
},
{
"text": "The main drawback of the aforementioned approaches is that they rely on the contextindependent domain knowledge returned by UMLS, which either returns an inaccurate mapping or no mapping and hence could possibly lead to wrong inference predictions. This work addresses these drawbacks with competitive performance on the MedNLI dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLI and External Knowledge",
"sec_num": "2.2"
},
{
"text": "We treat the task of Natural Language Inference (NLI) as a supervised classification task and state it as follows: given a premise sentence = ( 1 , ...,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": ") with length , a hypothesis sentence = ( \u210e 1 , ..., \u210e ) with length and the corresponding lexical (i.e., UMLS concept) and relational (i.e., UMLS relation triples) domain knowledge for the sentences represented as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "= ( 1 , ..., ), = ( \u210e 1 , ..., \u210e ) and = ( 1 , ..., ), = ( \u210e 1 , ..., \u210e )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "respectively, our goal is to learn a classifier \ue23c (a neural network in our case) which is able to predict the inference relation \u2208 between and by leveraging the domain knowledge, where = { , , }. Entailment means that when is true, then must be true; contradiction means when is true, then must be false; neutral means neither entailment nor contradiction. More formally,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "* = arg max \u2208 \ue23c( | , \u210e, , \u210e , , \u210e )",
"eq_num": "(1)"
}
],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "Here, and \u210e are the -th and -th concept, and and \u210e are the -th and -th relation triple of the premise and hypothesis respectively. Note that a word / \u210e in the premise/hypothesis could also be an abbreviation, which here we collectively call word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "As aforementioned, although incorporating domain knowledge from the UMLS helps to understand medical semantics in the text that go beyond basic linguistic understanding, it could also aggravate the inference process due to the missing words and the inaccurate mappings. To this end, we supplement the UMLS with other external resources in order to soft-align the context-relevant domain information to each word in the text. (Chen et al., 2016; Parikh et al., 2016) . In the encoder layer, an adaptive encoding scheme encodes each word in the premise/hypothesis sentence by integrating refined concept embeddings into the text representation, where the refinement is done by leveraging contextual evidence from the supplementary external resources. Then in the matching layer, the adaptive lexical encodings are enhanced with refined relational information codified in knowledge graphs using multi-source attention, that facilitates in semantically aligning and aggregating the interactions between the premise-hypothesis words. Finally, the classification layer composes the pair of sentences to a fixed length vector and predicts their relation. More details of each component will be presented in the next sections.",
"cite_spans": [
{
"start": 425,
"end": 444,
"text": "(Chen et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 445,
"end": 465,
"text": "Parikh et al., 2016)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "For accurately capturing the relevant lexical semantics of a word in its context, the adaptive encoding scheme exploits other external resources alongside the UMLS. A mixture model similar to (Yang and Mitchell, 2019) is employed to refine the initial UMLS concept embedding with a weighted sum of candidate concept vectors, where the weights are adaptively adjusted based on the relevance of the concept's supporting evidence to the word context. The refined concept embeddings are then integrated into the respective text representations to output the final encoded word representation. To be specific, given the premise = ( 1 , ...,",
"cite_spans": [
{
"start": 192,
"end": 217,
"text": "(Yang and Mitchell, 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Lexical Encoding",
"sec_num": "3.1"
},
{
"text": ") and the corresponding UMLS concepts = ( 1 , ..., ) (the same procedure is utilized for the and hypothesis pair, but for ease of presentation, we only describe the adaptive lexical encoding for the premise; subsequently, we drop the superscript p), each word is first converted to adimensional vector using a pre-trained word embedding method to yield the embedded representations = (\u0304 1 , ...,\u0304 ) and\u0304 = (\u0304 1 , ...,\u0304 ) respectively 6 . We then compute the initial relevance score, , between the -th word and its concept to get an idea of the degree to which the UMLS-retrieved lexical knowledge is useful in distilling the semantic 6 note that in the case of missing words without any concept/relation mapping, we provide a synthetic placeholder and set its embedding to zero meaning of the current word. It is computed as,",
"cite_spans": [
{
"start": 634,
"end": 635,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Lexical Encoding",
"sec_num": "3.1"
},
{
"text": "=\u0304 \u0304 (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Lexical Encoding",
"sec_num": "3.1"
},
{
"text": "where is a trainable weight matrix. However, as the UMLS returns mappings without considering the context in which the word occurs, it is possible that the retrieved concept expresses rather a wrong meaning, which could mislead the inference process. For example, consider the term \"DM\" in Figure 1; \"dextromethorphan\" is returned as the matching concept by the UMLS 7 , but \"diabetes mellitus\" is actually the correct concept in that specific context. Evidently, this wrong domain knowledge can avert the model from establishing important inferential clues like the semantic similarity between \"diabetes mellitus\" in the premise and \"diabetes\" in the hypothesis, which would otherwise help it to conclude in a conclusive manner that the semantic relationship is entailment. Besides, the UMLS does not offer total coverage of concepts across the whole natural language, which means that for some medical domain-specific jargon, such as the abbreviations \"SFA\" and \"MDR\", there exists no corresponding concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Lexical Encoding",
"sec_num": "3.1"
},
{
"text": "To tackle these issues, we resort to external resources that can provide supporting evidence to val-idate the relevant domain knowledge in the right textual context. Concretely, for the -th word in , a set of candidate concepts, 1 , .. , related to it and their corresponding supporting evidences, 1 , .. , are first retrieved (discussed in Sections 3.1.1 and 3.1.2). The candidate concepts and their supporting evidences are then embedded as -dimensional vectors,\u0304 = (\u0304 1 , ...,\u0304 ) and\u0304 = (\u0304 1 , ...,\u0304 ) respectively, using the same pre-trained embedding method as discussed before. LSTM (Hochreiter and Schmidhuber, 1997 ) is a special variant of Recurrent Neural Networks (Williams and Zipser, 1989) and has shown to capture long-range dependencies and nonlinear dynamics between words. In order to model the contextual information of each word that is indicative of its semantic meaning, we use a BiL-STM that processes the premise in both forward and backward directions and produces the hidden states = { 1 , ..., }. Subsequently, for the -th word, its context vector\u0302 \u2208 \u211d 2 is computed as:",
"cite_spans": [
{
"start": 589,
"end": 622,
"text": "(Hochreiter and Schmidhuber, 1997",
"ref_id": "BIBREF9"
},
{
"start": 675,
"end": 702,
"text": "(Williams and Zipser, 1989)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Lexical Encoding",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= ( + \u0304 )",
"eq_num": "(3)"
}
],
"section": "Adaptive Lexical Encoding",
"sec_num": "3.1"
},
{
"text": "where and are weight matrices to be learned, is the number of hidden units and indicates the sigmoid function. In order to gauge the suitability of each candidate concept, where \u2208 [1, ], as a more semantically similar concept to the word compared to the initially retrieved context-independent UMLS concept , we compute the relevance of its embedded supporting evidence to the current word context as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Lexical Encoding",
"sec_num": "3.1"
},
{
"text": "=\u0302 \u0304 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Lexical Encoding",
"sec_num": "3.1"
},
{
"text": "where is a trainable weight matrix. The initial concept embedding is, henceforth, refined with a mixture model that is formulated as a weighted sum of the candidate concept vectors, where the weights are the relevance scores. The refined lexical knowledge vector, \u2208 \u211d , is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Lexical Encoding",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= \u0304 + \u2211 =1 \u0304",
"eq_num": "(5)"
}
],
"section": "Adaptive Lexical Encoding",
"sec_num": "3.1"
},
{
"text": "Here, + \u2211 =1 = 1 to ensure that the weights are adaptively adjusted according to the concepts' relevance to the word context, and apparently the contribution from the most relevant concept will be properly emphasized with a higher relevance score. In the case of missing words which have no corresponding UMLS concepts and hence make the first term in equation 5 zero, the candidate concept vectors retrieved from external resource will compensate for that through the second term. Finally, the refined knowledge vector is integrated into its original contextual representation to get the adaptive lexical embedding:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Lexical Encoding",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= +",
"eq_num": "(6)"
}
],
"section": "Adaptive Lexical Encoding",
"sec_num": "3.1"
},
{
"text": "We consider as the final representation of the -th word that results in the encoded premise = ( 1 , ..., ) (similarly for the encoded hypothesis = ( \u210e 1 , ..., \u210e )), which are passed as inputs into the next component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Lexical Encoding",
"sec_num": "3.1"
},
{
"text": "To select candidate concepts for each concept, we measure the relevance between the respective contextual evidence (collected as described in Section 3.1.2) by performing dot product between their embeddings. For each abbreviation, its possible expansions in the Abbreviation Sense Inventory dataset are considered as the candidate concepts. While for each word (non-abbreviation), the candidates are selected from the total concept space (\u223c5300 medical concepts). We set to 5 based on hyperparameter analysis on the validation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Concepts",
"sec_num": "3.1.1"
},
{
"text": "The contextual evidence for each word/abbreviation in the medical text is collected as snippet of clinical note from two different external resources respectively. For each word (non-abbreviation) in the text, we leverage the clinical notes in the MIMIC-III critical care dataset (Johnson et al., 2016) to extract the relevant snippet in which the word appears. While for abbreviations, we first check against the more specialized Clinical Abbreviation Sense Inventory dataset (Moon et al., 2012) . It contains 440 most frequently used abbreviations selected from 352,267 dictated clinical notes. Each abbreviation instance is annotated with its long form, the source sentence where the abbreviation appears, along with other information. The source sentence is fed as the contextual evidence for the abbreviation. If it happens that the abbreviation is not found in the specialized dataset, then we resort to the MIMIC-III critical care dataset.",
"cite_spans": [
{
"start": 280,
"end": 302,
"text": "(Johnson et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 477,
"end": 496,
"text": "(Moon et al., 2012)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Evidence",
"sec_num": "3.1.2"
},
{
"text": "In order to capture fine-grained word-level information for semantic comparisons that lead to improved local inferential decisions, our proposed model attends over the word pair interactions between the encoded premise and the encoded hypothesis at both the lexical and relational levels using multisource attention mechanism. Figure 1 depicts the motivations for introducing this scheme. At the lexical level words are aligned to model their semantic similarity (i.e., in red), while the relational alignment reveals the innate semantic relations existing between medical entities (i.e., in green). This finegrained alignment simulated by the multi-source attention is important for medical NLI as the semantic relation between the premise-hypothesis depends largely on the relations of aligned semantic units, which in turn require reasoning over a range of domain-specific knowledge phenomena.",
"cite_spans": [],
"ref_spans": [
{
"start": 327,
"end": 335,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Matching with Multi-Source Attention",
"sec_num": "3.2"
},
{
"text": "The adaptive encodings outputted from the previous component already capture the lexical semantics appropriately, so lexical alignment soft-aligns the adaptive representations of the -th word in the encoded premise and the -th word in the encoded hypothesis into an alignment matrix L \u2208 \u211d \u00d7 . It is calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching with Multi-Source Attention",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= \u22c5 \u210e",
"eq_num": "(7)"
}
],
"section": "Matching with Multi-Source Attention",
"sec_num": "3.2"
},
{
"text": "Using these cross-sentence word attention weights, the lexical context vector, , of the -th word in the encoded premise is computed to characterize the most semantically similar parts in the encoded hypothesis and vice versa:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching with Multi-Source Attention",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= ( ) \u2211 =1 ( ) , = \u2211 =1 (8) = ( ) \u2211 =1 ( ) , \u210e = \u2211 =1 \u210e",
"eq_num": "(9)"
}
],
"section": "Matching with Multi-Source Attention",
"sec_num": "3.2"
},
{
"text": "As for relational alignment, first the knowledge graph for each word -summarizing its relationships with other concepts in the medical domainis retrieved from the UMLS (next sub-section). It then converts them to adaptive relational embeddings / \u210e with a graph representation technique, which are attended over the same way as the lexical alignment ( / \u210e in equations 7, 8, 9 replaced with / \u210e ), but for modeling the explicit dependency relationship between the word graph representations to produce the relational context vectors, and , for the premise and hypothesis respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching with Multi-Source Attention",
"sec_num": "3.2"
},
{
"text": "The interactive features in the lexical context vector and the relational context vector are then merged as the multi-source context vector:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching with Multi-Source Attention",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= 1 ([ ; ]) + 1 (10) \u210e = 2 ([ \u210e ; \u210e ]) + 2",
"eq_num": "(11)"
}
],
"section": "Matching with Multi-Source Attention",
"sec_num": "3.2"
},
{
"text": "where 1 and 2 are trainable weight matrices and [;] indicates concatenation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching with Multi-Source Attention",
"sec_num": "3.2"
},
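{
"text": "Equations 10-11 reduce to one affine projection over the concatenated lexical and relational context vectors. A minimal sketch, where the parameter names and dimensions are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
c_i = rng.normal(size=d)          # lexical context vector of premise word i
r_i = rng.normal(size=d)          # relational context vector of premise word i
W1 = rng.normal(size=(d, 2 * d))  # trainable weight matrix
b1 = np.zeros(d)                  # trainable bias

# Eq. 10: multi-source context vector for premise word i
m_i = W1 @ np.concatenate([c_i, r_i]) + b1
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching with Multi-Source Attention",
"sec_num": "3.2"
},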
{
"text": "Knowledge The relational information between medical concepts can provide invaluable inferential clues to enhance the interactive features between the word pairs in the sentences. In order to create the relational knowledge graph, we resort to the Semantic Network within the UMLS. We first use MetaMap to map the words/phrases of the premise-hypothesis pairs in the MedNLI dataset to their corresponding UMLS concepts. This gives us a total of \u223c 5300 unique medical concepts, which form the nodes of the knowledge graph. Two medical concepts form an edge if there exists a relationship between their respective semantic types in the Semantic Network and we get a total of \u223c 15,000,000 edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Embedding of Relational",
"sec_num": "3.2.1"
},
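{
"text": "A minimal sketch of this construction step, assuming a MetaMap-style concept-to-semantic-type mapping and the set of related type pairs from the Semantic Network are already loaded (the concepts and types below are hypothetical):

```python
from itertools import combinations

# Hypothetical inputs: concept -> UMLS semantic type, and the pairs of
# semantic types that are related in the Semantic Network.
concept_type = {'diabetes': 'Disease', 'insulin': 'Hormone', 'foot': 'BodyPart'}
related_types = {('Hormone', 'Disease'), ('Disease', 'BodyPart')}

# Two concepts form an edge if their semantic types are related (either order).
edges = set()
for a, b in combinations(concept_type, 2):
    ta, tb = concept_type[a], concept_type[b]
    if (ta, tb) in related_types or (tb, ta) in related_types:
        edges.add((a, b))
```

Here 'diabetes' links to both 'insulin' and 'foot', while 'insulin' and 'foot' stay unconnected, mirroring how the concept nodes and edges are induced from type-level relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Embedding of Relational Knowledge",
"sec_num": "3.2.1"
},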
{
"text": "We employ graph attention (Veli\u010dkovi\u0107 et al., 2017; Guan et al., 2019) to represent the knowledge graph as low-dimensional vector(s) for each medical concept(s). In order to propagate the refined lexical knowledge into the relational embeddings, we compute a mixture of the graph embeddings between the UMLS retrieved concept and its candidate concepts, where the same and weights from adaptive lexical encoding are used. This way, the graph embedded relational knowledge will align appropriately with the context-aware medical concept. First, for each concept and its candidates, their respective one-hop graph is retrieved from the aforementioned relational knowledge graph. That is, say for the -th medical concept (similarly for its candidate concepts , where \u2208 [1, ]), its one-hop graph ( ) is represented using its relation triples as ( ) = { 1 , ... }. Here, the -th triple indicates semantic relationship of the -th concept with a neighboring concept and can be written as (\u210e , , ) , where the -th concept is the head concept in each. Note that we use the concept's preferred name for each concept, and hence represent all head and tail concepts using the previous pretrained embedding. Graph attention uses attention mechanism to learn the relative weight between two connected concepts (Wu et al., 2020) , that is used to obtain the graph vector,\u0302 , as:",
"cite_spans": [
{
"start": 26,
"end": 51,
"text": "(Veli\u010dkovi\u0107 et al., 2017;",
"ref_id": "BIBREF27"
},
{
"start": 52,
"end": 70,
"text": "Guan et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 981,
"end": 989,
"text": "(\u210e , , )",
"ref_id": null
},
{
"start": 1296,
"end": 1313,
"text": "(Wu et al., 2020)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Embedding of Relational",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= \u2211 =1 [\u210e ; ] (12) = (\u0302 ) \u2211 \u2032 =1\u0302 \u2032 (13) = ( 1 ) \u210e( 2 \u210e + 3 )",
"eq_num": "(14)"
}
],
"section": "Adaptive Embedding of Relational",
"sec_num": "3.2.1"
},
{
"text": "where is the degree of concept , and is a trainable relation vector for relation and is randomly initialized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Embedding of Relational",
"sec_num": "3.2.1"
},
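{
"text": "The graph attention of Equations 12-14 over one concept's one-hop triples can be sketched as follows; the parameter names and dimensions are illustrative, not the paper's settings:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(2)
d, N = 8, 5                       # embedding size; degree of concept c_i
h = rng.normal(size=(N, d))       # head-concept embeddings (all c_i here)
t = rng.normal(size=(N, d))       # tail (neighbor) concept embeddings
r = rng.normal(size=(N, d))       # trainable relation vectors
W1, W2, W3 = (rng.normal(size=(d, d)) for _ in range(3))

# Eq. 14: unnormalized score of each triple
beta = np.array([(W1 @ r[n]) @ np.tanh(W2 @ h[n] + W3 @ t[n]) for n in range(N)])
alpha = softmax(beta)             # Eq. 13: normalized triple weights
# Eq. 12: graph vector as the weighted sum of [head; tail] concatenations
g = sum(alpha[n] * np.concatenate([h[n], t[n]]) for n in range(N))
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Embedding of Relational Knowledge",
"sec_num": "3.2.1"
},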
{
"text": "The adaptive relational embedding is then computed as a mixture model using the graph vectors for the concept and its candidates, as shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Embedding of Relational",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= \u0302 + \u2211 =1 \u0302",
"eq_num": "(15)"
}
],
"section": "Adaptive Embedding of Relational",
"sec_num": "3.2.1"
},
{
"text": "For notation consistency, we instead use the notations and \u210e to denote the adaptive relational embedding of the -th/ -th concept in the premise and hypothesis respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Embedding of Relational",
"sec_num": "3.2.1"
},
{
"text": "In order to aggregate the inferential semantics at the word level to a sentence representation, we first enrich the context vectors with similarity and closeness information (Chen et al., 2016; Kumar et al., ",
"cite_spans": [
{
"start": 174,
"end": 193,
"text": "(Chen et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 194,
"end": 207,
"text": "Kumar et al.,",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2299 ; \u2299 ])",
"eq_num": "(16)"
}
],
"section": "Inference",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u210e = ([ \u210e ; \u210e ; \u210e ; \u210e \u2212 \u210e ; \u210e \u2212 \u210e ; \u210e \u2299 \u210e ; \u210e \u2299 \u210e ])",
"eq_num": "(17)"
}
],
"section": "Inference",
"sec_num": "3.3"
},
{
"text": "where (.) is a standard projection layer with ReLU activation function followed by a BiLSTM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3"
},
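{
"text": "Assuming the seven-term feature layout of Equation 17 (the encoded word, its lexical context vector, and its multi-source context vector, together with their differences and element-wise products), the enrichment step is a single concatenation before the projection layer. A sketch under those assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8
h_j = rng.normal(size=d)   # encoded hypothesis word
c_j = rng.normal(size=d)   # its lexical context vector
m_j = rng.normal(size=d)   # its multi-source context vector

# Eq. 17: concatenate the vectors with their differences and
# element-wise products; the result is fed to the projection layer F.
features = np.concatenate([h_j, c_j, m_j, h_j - c_j, h_j - m_j, h_j * c_j, h_j * m_j])
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3"
},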
{
"text": "Finally, a pooling layer, comprising max and mean pooling, is used to convert the vectors into a fixed-length vector and then fed into a 2-layer multilayer perception (MLP) classifier to make the final inference prediction. The entire model is trained end-to-end, through minimizing the cross-entropy loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3"
},
{
"text": "We evaluate performance of our model on the only publicly available dataset for this task, namely MedNLI (Romanov and Shivade, 2018) . Each instance in this expert-annotated dataset is a premisehypothesis pair, along with a gold label indicating their inferential relationship. The training, development and test sets consist of 11,232, 1395 and 1422 sentence pairs respectively.",
"cite_spans": [
{
"start": 105,
"end": 132,
"text": "(Romanov and Shivade, 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "We compare our model against both sentence encoding-based (InferSent (IS) and Gated BiL-STM (GBLM)) and sentence pair interaction-based (ESIM and Match-LSTM (MLM)) baselines. Furthermore, we incorporate domain knowledge in the form of UMLS medical concepts and relation information into the best performing model from each group (i.e., InferSent and ESIM). In the case of InferSent, the knowledge features are fed during encoding into the text representation; for ESIM, we also incorporate it into the attention. We refer to these knowledge-enhanced versions of the baselines with the \"w/K\" suffix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "We use pre-trained 768-BioBERT (Lee et al., 2020) vectors to initialize all word and concept embeddings in the adaptive lexical encoding step, with update during training. The hidden states of both the BiLSTMs during encoding and inference are set to 384. An Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 0.0005 is used to optimize all the trainable weights. The mini-batch size is set to 64. Table 1 reports the accuracy of the models on the development and test sets of the MedNLI dataset. MUSKAN outperforms all the baselines by a significant margin with a test accuracy of 79.42%. Specifically, there is a 2.79% performance improvement in comparison to the best performing baseline, ESIM w/K. Although ESIM w/K exploits the semantic knowledge in UMLS, we can assert that refining this knowledge using an adaptive encoding scheme based on contextual evidence is able to alleviate the noise introduced by the domain knowledge, and hence leads to a major boost in performance.",
"cite_spans": [
{
"start": 31,
"end": 49,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 274,
"end": 295,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 417,
"end": 424,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
{
"text": "A general observation is that the sentence encoding baselines perform poorly compared to the counterpart sentence interaction ones. The main limitation of the encoding approaches is that they fail to capture the interactions between the premisehypothesis words, that could otherwise provide important alignment information for inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "To further ascertain the effectiveness of our model, we evaluate the contributions of key factors in our method by performing an ablation study. The ablated versions of our model are shown on the far right of Table 1 as 1 and 2 . Since our proposed method encodes the premise-hypothesis sentences by integrating the refined domain knowledge into the text representation, we wonder how the model would perform without this adaptive encoding. So in the encoding step of 1 , we concatenate the initial UMLS retrieved concept embedding to the corresponding text representation, which is then passed as input to the subsequent component. We observe that this leads to a drop in performance by 4.28% compared to the whole model. This verifies our intuition that embedding the context-relevant domain knowledge can indeed improve understanding the semantics of the text. In the case of 2 , we use just the adaptive lexical encoding to compute the attention matrix, and can see that this declines the performance by 2.35%. This shows that our proposed model works more effectively by capturing the cross-features from both adaptive lexical and adaptive relational representations at the same time using multi-source attention.",
"cite_spans": [],
"ref_spans": [
{
"start": 209,
"end": 216,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "There are two different visualizations to demonstrate our model's interpretability. First, the visualization of lexical alignment shows how adaptive lexical encoding helps to align the semantically similar words in the premise-hypothesis sentence pair. Next, the multi-source attention visualization enhances the lexical alignment by highlighting the salient words that well represent the semantic relation between them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.5"
},
{
"text": "The sub-figures in Figure 3 depict the attention heatmaps yielded by the best performing baseline, ESIM w/K, and our proposed MUSKAN. The darker shade indicates higher importance in classification. The alignment of the words \"feeling\", \"fatigued\", \"light\", and \"headed\" in to \"weakness\" in \u210e is critical in deciding if the former entails the latter. From the highlighted words in the middle sub-figure in Figure 3 for lexical alignment matrix of our proposed model, it can be seen that integrating context-aware medical concepts into the text representation is in fact able to capture this semantics. The abbreviation \"USOH\" stands for \"usual state of health\" and expresses a transition from normal to a deterioration of patient's health in this context. In the right sub-figure for multisource attention matrix, the higher attention put on the words \"onset\", \"prior\", \"to\", \"admission\", \"started\" and PCP\", and their alignment with \"new\" are able to capture this nuance. We hypothesize that this is facilitated by the semantic relation information between the medical concepts incorporated through the multi-source attention. On the other hand, from the left sub-figure, it can be seen that ESIM w/K fails to model these context-aware lexical and relational associations due to missing words (e.g., USOH) and inaccurate mappings (e.g., PCP), which result in a wrong prediction.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 27,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 405,
"end": 413,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.5"
},
{
"text": "We perform error analysis on the result of MUSKAN which divulges open challenges and directions towards pending future research in medical NLI. Typical errors made by our approach include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.6"
},
{
"text": "Numeric values: For some premises, the text can describe clinical measurements as numeric val- ues, which make it difficult for the model to semantically relate these to the condition conveyed in the hypothesis. For example, looking at the premise in Figure 4a , we can see that the different vital signs, represented with the abbreviation \"VS\", are expressed in terms of numeric values (e.g., T 98.9, HR 73, BP 121/90). However, for the model to infer that these values indicate that the patient is \"hemodynamically stable\" is challenging. Hypothesizing, we attribute this fail to the fact that medical notes lack in covering such knowledge and, perhaps, leveraging other external resources such as the Wikipedia or the laboratory test results available in electronic health records (EHR) might help to mitigate this drawback. Ambiguity: Some instances in the dataset contain words/phrases used in everyday conversation, which could appear as vague terms with respect to medical perspective and result in misclassification. As an example,\"handfuls\" in the premise in Figure 4b is actually referring to \"more medications than directed\" in the hypothesis and is an \"entailment\". However, the ambiguity here lies in that \"overdose of Dilaudid\" (which has label \"neutral\") possibly expresses similar concept, and hence leads to a false positive.",
"cite_spans": [],
"ref_spans": [
{
"start": 251,
"end": 260,
"text": "Figure 4a",
"ref_id": "FIGREF4"
},
{
"start": 1068,
"end": 1078,
"text": "Figure 4b",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.6"
},
{
"text": "This work discloses the effectiveness of contextaware domain knowledge in medical NLI and proposes a systematic approach to infuse such knowledge using an adaptive encoding scheme. By employing a multi-source attention mechanism that is able to model both the lexical and relational semantics, it is able to mitigate the noise introduced by the abbreviation-like jargon prevalent in medical text. Through both qualitative and quantitative analysis, our proposed framework advances the limited work so far done on medical NLI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "Figure 4b: Premise: Today she got into an argument with her husband and felt that she \\\"wanted to sleep\\\" and therefore took \\\"handfuls\\\" of dilaudid. Hypothesis: She took more medication than directed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "There are several possible directions that could be explored as future work. Firstly, it would be interesting to investigate whether enriching the refined domain knowledge with explicit syntactic information (e.g., a parse tree) of the premise-hypothesis pair is helpful. Secondly, we could extract knowledge from other relevant medical knowledge bases and incorporate deeper subgraph information (e.g., two hops). Furthermore, we could test the utility of the proposed framework on downstream NLP applications that similarly suffer from small data sizes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "background/external/domain knowledge are used interchangeably in this paper",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "note that this attention does not capture cross-features between corresponding words in the two sentences, so not grouped into the second group",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/Andy-jqa/bioelmo",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "based on the highest MetaMap Indexing (MMI) score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their careful reading of our manuscript and providing helpful comments and suggestions. This work is supported in part by NSF under grants III-1763325, III-1909323, and SaTC-1930941. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Dbpedia: A nucleus for a web of open data",
"authors": [
{
"first": "S",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Bizer",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kobilarov",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Cyganiak",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Ives",
"suffix": ""
}
],
"year": 2007,
"venue": "The semantic web",
"volume": "",
"issue": "",
"pages": "722--735",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Auer, S., Bizer, C., Kobilarov, G., Lehmann, J., Cyga- niak, R., and Ives, Z. (2007). Dbpedia: A nucleus for a web of open data. In The semantic web, pages 722-735. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "S",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.05326"
]
},
"num": null,
"urls": [],
"raw_text": "Bowman, S. R., Angeli, G., Potts, C., and Manning, C. D. (2015). A large annotated corpus for learn- ing natural language inference. arXiv preprint arXiv:1508.05326.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enhanced lstm for natural language inference",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.06038"
]
},
"num": null,
"urls": [],
"raw_text": "Chen, Q., Zhu, X., Ling, Z., Wei, S., Jiang, H., and Inkpen, D. (2016). Enhanced lstm for natural lan- guage inference. arXiv preprint arXiv:1609.06038.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural natural language inference models enhanced with external knowledge",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Z.-H",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.04289"
]
},
"num": null,
"urls": [],
"raw_text": "Chen, Q., Zhu, X., Ling, Z.-H., Inkpen, D., and Wei, S. (2017a). Neural natural language inference models enhanced with external knowledge. arXiv preprint arXiv:1711.04289.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Recurrent neural network-based sentence encoder with gated attention for natural language inference",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Z.-H",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.01353"
]
},
"num": null,
"urls": [],
"raw_text": "Chen, Q., Zhu, X., Ling, Z.-H., Wei, S., Jiang, H., and Inkpen, D. (2017b). Recurrent neural network-based sentence encoder with gated attention for natural lan- guage inference. arXiv preprint arXiv:1708.01353.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Supervised learning of universal sentence representations from natural language inference data",
"authors": [
{
"first": "A",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.02364"
]
},
"num": null,
"urls": [],
"raw_text": "Conneau, A., Kiela, D., Schwenk, H., Barrault, L., and Bordes, A. (2017). Supervised learning of universal sentence representations from natural language infer- ence data. arXiv preprint arXiv:1705.02364.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Ppdb: The paraphrase database",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "758--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ganitkevitch, J., Van Durme, B., and Callison-Burch, C. (2013). Ppdb: The paraphrase database. In Pro- ceedings of the 2013 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 758-764.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Story ending generation with incremental encoding and commonsense knowledge",
"authors": [
{
"first": "J",
"middle": [],
"last": "Guan",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6473--6480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guan, J., Wang, Y., and Huang, M. (2019). Story end- ing generation with incremental encoding and com- monsense knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6473-6480.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Methods for using textual entailment in open-domain question answering",
"authors": [
{
"first": "S",
"middle": [],
"last": "Harabagiu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hickl",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "905--912",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harabagiu, S. and Hickl, A. (2006). Methods for using textual entailment in open-domain question answer- ing. In Proceedings of the 21st International Confer- ence on Computational Linguistics and the 44th an- nual meeting of the Association for Computational Linguistics, pages 905-912. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Long shortterm memory",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hochreiter, S. and Schmidhuber, J. (1997). Long short- term memory. Neural computation, 9(8):1735- 1780.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Probing biomedical embeddings from language models",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "W",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.02181"
]
},
"num": null,
"urls": [],
"raw_text": "Jin, Q., Dhingra, B., Cohen, W. W., and Lu, X. (2019). Probing biomedical embeddings from language mod- els. arXiv preprint arXiv:1904.02181.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Mimic-iii, a freely accessible critical care database",
"authors": [
{
"first": "A",
"middle": [
"E"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "T",
"middle": [
"J"
],
"last": "Pollard",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "H",
"middle": [
"L"
],
"last": "Li-Wei",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ghassemi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Moody",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Szolovits",
"suffix": ""
},
{
"first": "L",
"middle": [
"A"
],
"last": "Celi",
"suffix": ""
},
{
"first": "R",
"middle": [
"G"
],
"last": "Mark",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johnson, A. E., Pollard, T. J., Shen, L., Li-wei, H. L., Feng, M., Ghassemi, M., Moody, B., Szolovits, P., Celi, L. A., and Mark, R. G. (2016). Mimic-iii, a freely accessible critical care database. Scientific data, 3:160035.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Adventure: Adversarial training for textual entailment with knowledge-guided examples",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.04680"
]
},
"num": null,
"urls": [],
"raw_text": "Kang, D., Khot, T., Sabharwal, A., and Hovy, E. (2018). Adventure: Adversarial training for textual entailment with knowledge-guided examples. arXiv preprint arXiv:1805.04680.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "D",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Ask me anything: Dynamic memory networks for natural language processing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Irsoy",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ondruska",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Gulrajani",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "1378--1387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, A., Irsoy, O., Ondruska, P., Iyyer, M., Bradbury, J., Gulrajani, I., Zhong, V., Paulus, R., and Socher, R. (2016). Ask me anything: Dynamic memory net- works for natural language processing. In Interna- tional conference on machine learning, pages 1378- 1387.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "C",
"middle": [
"H"
],
"last": "So",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H., and Kang, J. (2020). Biobert: a pre-trained biomed- ical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Several experiments on investigating pretraining and knowledge-enhanced models for natural language inference",
"authors": [
{
"first": "T",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.12104"
]
},
"num": null,
"urls": [],
"raw_text": "Li, T., Zhu, X., Liu, Q., Chen, Q., Chen, Z., and Wei, S. (2019). Several experiments on inves- tigating pretraining and knowledge-enhanced mod- els for natural language inference. arXiv preprint arXiv:1904.12104.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A text summarization approach under the influence of textual entailment",
"authors": [
{
"first": "E",
"middle": [],
"last": "Lloret",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Ferr\u00e1ndez",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Munoz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palomar",
"suffix": ""
}
],
"year": 2008,
"venue": "NLPCS",
"volume": "",
"issue": "",
"pages": "22--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lloret, E., Ferr\u00e1ndez, O., Munoz, R., and Palomar, M. (2008). A text summarization approach under the influence of textual entailment. In NLPCS, pages 22- 31.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "World knowledge for reading comprehension: Rare entity prediction with hierarchical lstms using external descriptions",
"authors": [
{
"first": "T",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "J",
"middle": [
"C K"
],
"last": "Cheung",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Precup",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "825--834",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long, T., Bengio, E., Lowe, R., Cheung, J. C. K., and Precup, D. (2017). World knowledge for reading comprehension: Rare entity prediction with hierar- chical lstms using external descriptions. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 825-834.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Natural language inference",
"authors": [
{
"first": "B",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MacCartney, B. and Manning, C. D. (2009). Natural language inference. Citeseer.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A sick cure for the evaluation of compositional distributional semantic models",
"authors": [
{
"first": "M",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "216--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marelli, M., Menini, S., Baroni, M., Bentivogli, L., Bernardi, R., Zamparelli, R., et al. (2014). A sick cure for the evaluation of compositional distribu- tional semantic models. In LREC, pages 216-223.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, G. A. (1995). Wordnet: a lexical database for english. Communications of the ACM, 38(11):39- 41.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Clinical abbreviation sense inventory",
"authors": [
{
"first": "S",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pakhomov",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Melton",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moon, S., Pakhomov, S., and Melton, G. (2012). Clini- cal abbreviation sense inventory.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A decomposable attention model for natural language inference",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Parikh",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.01933"
]
},
"num": null,
"urls": [],
"raw_text": "Parikh, A. P., T\u00e4ckstr\u00f6m, O., Das, D., and Uszkor- eit, J. (2016). A decomposable attention model for natural language inference. arXiv preprint arXiv:1606.01933.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Multi-task video captioning with video and entailment generation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Pasunuru",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.07489"
]
},
"num": null,
"urls": [],
"raw_text": "Pasunuru, R. and Bansal, M. (2017). Multi-task video captioning with video and entailment generation. arXiv preprint arXiv:1704.07489.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Lessons from natural language inference in the clinical domain",
"authors": [
{
"first": "A",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Shivade",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.06752"
]
},
"num": null,
"urls": [],
"raw_text": "Romanov, A. and Shivade, C. (2018). Lessons from nat- ural language inference in the clinical domain. arXiv preprint arXiv:1808.06752.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "A",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1509.00685"
]
},
"num": null,
"urls": [],
"raw_text": "Rush, A. M., Chopra, S., and Weston, J. (2015). A neu- ral attention model for abstractive sentence summa- rization. arXiv preprint arXiv:1509.00685.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Graph attention networks",
"authors": [
{
"first": "P",
"middle": [],
"last": "Veli\u010dkovi\u0107",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Cucurull",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Casanova",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Romero",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Lio",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.10903"
]
},
"num": null,
"urls": [],
"raw_text": "Veli\u010dkovi\u0107, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., and Bengio, Y. (2017). Graph attention net- works. arXiv preprint arXiv:1710.10903.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning natural language inference with lstm",
"authors": [
{
"first": "S",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1512.08849"
]
},
"num": null,
"urls": [],
"raw_text": "Wang, S. and Jiang, J. (2015). Learning natu- ral language inference with lstm. arXiv preprint arXiv:1512.08849.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Improving natural language inference using external knowledge in the science questions domain",
"authors": [
{
"first": "X",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Kapanipathi",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Musa",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Talamadupula",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Abdelaziz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Fokoue",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Makni",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Mattei",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "7208--7215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, X., Kapanipathi, P., Musa, R., Yu, M., Tala- madupula, K., Abdelaziz, I., Chang, M., Fokoue, A., Makni, B., Mattei, N., et al. (2019). Improving natu- ral language inference using external knowledge in the science questions domain. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 33, pages 7208-7215.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Dynamic integration of background knowledge in neural nlu systems",
"authors": [
{
"first": "D",
"middle": [],
"last": "Weissenborn",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ko\u010disk\u1ef3",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.02596"
]
},
"num": null,
"urls": [],
"raw_text": "Weissenborn, D., Ko\u010disk\u1ef3, T., and Dyer, C. (2017). Dy- namic integration of background knowledge in neu- ral nlu systems. arXiv preprint arXiv:1706.02596.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "A",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "S",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.05426"
]
},
"num": null,
"urls": [],
"raw_text": "Williams, A., Nangia, N., and Bowman, S. R. (2017). A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A learning algorithm for continually running fully recurrent neural networks",
"authors": [
{
"first": "R",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Zipser",
"suffix": ""
}
],
"year": 1989,
"venue": "Neural computation",
"volume": "1",
"issue": "2",
"pages": "270--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Williams, R. J. and Zipser, D. (1989). A learning algo- rithm for continually running fully recurrent neural networks. Neural computation, 1(2):270-280.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A comprehensive survey on graph neural networks",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "S",
"middle": [
"Y"
],
"last": "Philip",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Transactions on Neural Networks and Learning Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., and Philip, S. Y. (2020). A comprehensive survey on graph neural networks. IEEE Transactions on Neu- ral Networks and Learning Systems.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Leveraging knowledge bases in lstms for improving machine reading",
"authors": [
{
"first": "B",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.09091"
]
},
"num": null,
"urls": [],
"raw_text": "Yang, B. and Mitchell, T. (2019). Leveraging knowl- edge bases in lstms for improving machine reading. arXiv preprint arXiv:1902.09091.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Figure 2 illustrates a high-level overview of the architecture of our proposed model, MUSKAN. It follows the encode-match-classify framework of general text-based NLI models",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Architecture of our proposed MUSKAN model. The overall framework is shown to the left in a bottom-up fashion, with the sample pair inFigure 1as the input. The Adaptive Encoding Scheme is illustrated in detail in the figure on the right, taking the premise as an example input.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "Visualizations of the attention heatmaps for the following instance from the test set of the MedNLI dataset: {p: Patient was in his USOH until one week prior to admission when he started feeling fatigued and light headed and presented to his PCP. h: Patient has new onset weakness. y: Entailment}.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF4": {
"text": "Samples from MedNLI dataset to demonstrate error analysis for (a) Numeric values and (b) Ambiguity",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"html": null,
"text": "Accuracy performance of different models on the development and test sets of MedNLI. We use 768-BioBERT embeddings in all. g/l indicates the percentage gain(+)/loss(-) compared to ESIM w/K.",
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">GBLM IS</td><td colspan=\"4\">IS w/K MLM ESIM ESIM w/K MUSKAN Ab 1</td><td>Ab 2</td></tr><tr><td>Dev</td><td>73.11</td><td colspan=\"2\">74.02 74.79</td><td>74.98 76.37 78.88</td><td>80.09</td><td>76.99 78.13</td></tr><tr><td colspan=\"2\">Dev g/l -7.31</td><td colspan=\"2\">-6.16 -5.19</td><td>-4.94 -3.18 N/A</td><td>+1.53</td><td>-2.39 -0.95</td></tr><tr><td>Test</td><td>72.15</td><td colspan=\"2\">73.82 74.14</td><td>74.03 75.19 77.26</td><td>79.42</td><td>76.02 77.55</td></tr><tr><td colspan=\"2\">Test g/l -6.61</td><td colspan=\"2\">-4.45 -4.04</td><td>-4.18 -2.68 N/A</td><td>+2.79</td><td>-1.60 +0.37</td></tr></table>"
}
}
}
}