{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:09:14.354523Z"
},
"title": "Social Media Medical Concept Normalization using RoBERTa in Ontology Enriched Text Similarity Framework",
"authors": [
{
"first": "Katikapalli",
"middle": [],
"last": "Subramanyam",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NIT Trichy",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Sivanesan",
"middle": [],
"last": "Sangeetha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NIT Trichy",
"location": {
"country": "India"
}
},
"email": "sangeetha@nitt.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Pattisapu et al. (2020) formulate medical concept normalization (MCN) as text similarity problem and propose a model based on RoBERTa and graph embedding based target concept vectors. However, graph embedding techniques ignore valuable information available in the ontology like concept description and synonyms. In this work, we enhance the model of Pattisapu et al. (2020) with two novel changes. First, we use retrofitted target concept vectors instead of graph embedding based vectors. It is the first work to leverage both concept description and synonyms to represent concepts in the form of retrofitted target concept vectors in text similarity framework based social media MCN. Second, we generate both concept and concept mention vectors with same size which eliminates the need of dense layers to project concept mention vectors into the target concept embedding space. Our model outperforms existing methods with improvements up to 3.75% on two standard datasets. Further when trained only on ontology synonyms, our model outperforms existing methods with improvements up to 14.61%. We attribute these improvements to the two novel changes introduced.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Pattisapu et al. (2020) formulate medical concept normalization (MCN) as text similarity problem and propose a model based on RoBERTa and graph embedding based target concept vectors. However, graph embedding techniques ignore valuable information available in the ontology like concept description and synonyms. In this work, we enhance the model of Pattisapu et al. (2020) with two novel changes. First, we use retrofitted target concept vectors instead of graph embedding based vectors. It is the first work to leverage both concept description and synonyms to represent concepts in the form of retrofitted target concept vectors in text similarity framework based social media MCN. Second, we generate both concept and concept mention vectors with same size which eliminates the need of dense layers to project concept mention vectors into the target concept embedding space. Our model outperforms existing methods with improvements up to 3.75% on two standard datasets. Further when trained only on ontology synonyms, our model outperforms existing methods with improvements up to 14.61%. We attribute these improvements to the two novel changes introduced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the rise of internet and easy accessibility, social media has become primary choice to share information. Social media includes generic platforms like twitter, facebook and health related platforms like AskAPatient.com and Patient.info. Most of the common public express their health related issues in a descriptive way using informal language. For example, a person suffering from diarrhoea expresses it as \"bathroom with runs\". Some of the colloquial health mentions along with standard concepts is given in Table 1 . However all the knowledge in clinical ontologies is available in standard medical terms. Due to this variation in the style of languages used, it is necessary to map health related mentions expressed in colloquial language to corresponding concepts in standard clinical ontology. This mapping of colloquial mentions to standard concepts is referred to as medical concept normalization (MCN) and is useful in applications like identification of adverse drug reactions, clinical paraphrasing, question answering and public health monitoring (Lee et al., 2017; Pattisapu et al., 2020) . However, normalizing user-generated health related mentions is difficult due to the colloquial language and noisy nature.",
"cite_spans": [
{
"start": 1064,
"end": 1082,
"text": "(Lee et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 1083,
"end": 1106,
"text": "Pattisapu et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 515,
"end": 522,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Research in medical concept normalization in social media text started with phrase-based machine translation model of Limsopatham and Collier (2015) . Previously, most of the existing work approach medical concept normalization in social media text as supervised multi-class text classification (Limsopatham and Collier, 2016; Tutubalina et al., 2018; Miftahutdinov and Tutubalina, 2019; Kalyan and Sangeetha, 2020) . In this approach, initially concept mention representation vector is learned using any of the deep learning models and then it is given to fully connected softmax layer to get the predicted concept. Some of the models are based on shallow neural networks like CNN or RNN with word embeddings as input (Limsopatham and Collier, 2016; Tutubalina et al., 2018; (Donnelly, 2006) .",
"cite_spans": [
{
"start": 118,
"end": 148,
"text": "Limsopatham and Collier (2015)",
"ref_id": "BIBREF8"
},
{
"start": 295,
"end": 326,
"text": "(Limsopatham and Collier, 2016;",
"ref_id": "BIBREF9"
},
{
"start": 327,
"end": 351,
"text": "Tutubalina et al., 2018;",
"ref_id": "BIBREF17"
},
{
"start": 352,
"end": 387,
"text": "Miftahutdinov and Tutubalina, 2019;",
"ref_id": "BIBREF11"
},
{
"start": 388,
"end": 415,
"text": "Kalyan and Sangeetha, 2020)",
"ref_id": "BIBREF16"
},
{
"start": 719,
"end": 750,
"text": "(Limsopatham and Collier, 2016;",
"ref_id": "BIBREF9"
},
{
"start": 751,
"end": 775,
"text": "Tutubalina et al., 2018;",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "the models are based on BERT (Miftahutdinov and Tutubalina, 2019; Kalyan and Sangeetha, 2020) . For example, Limsopatham and Collier (2016) proposed models based on CNN and RNN with outof-domain word embeddings as input, Tutubalina et al. (2018) experimented with attention based RNN model on the top of in-domain word embeddings and Subramanyam and Sivanesan (2020) experimented with bidirectional RNNs and ELMo embeddings. Miftahutdinov and Tutubalina (2019) experimented with BERT and tf-idf based semantic features while Kalyan and Sangeetha (2020) proposed model based on BERT (Devlin et al., 2019) and Highway Networks (Srivastava et al., 2015) . The main drawbacks in classification based MCN systems are a) completely ignoring target concept information by representing target concepts as meaningless one hot vectors. b) with the addition of new concepts every year, the number of concepts in clinical knowledge base is increasing. To accommodate new concepts, these models have to be re-trained from scratch which is timetaking and computationally expensive process (Pattisapu et al., 2020) . To overcome the drawbacks in multi-class classification framework to normalize medical concepts, Pattisapu et al. (2020) formulate MCN as a text similarity problem and propose a model based on RoBERTa (Liu et al., 2019) and graph embedding based target concept vectors. Initially, all the target concept vectors are generated using graph embedding techniques. Then they finetune a RoBERTa based model which learns concept mention vector and then projects it into target concepts embedding space using two dense fully connected layers. Finally, the closest target concept to the concept mention in the embedding space is chosen.",
"cite_spans": [
{
"start": 29,
"end": 65,
"text": "(Miftahutdinov and Tutubalina, 2019;",
"ref_id": "BIBREF11"
},
{
"start": 66,
"end": 93,
"text": "Kalyan and Sangeetha, 2020)",
"ref_id": "BIBREF16"
},
{
"start": 109,
"end": 139,
"text": "Limsopatham and Collier (2016)",
"ref_id": "BIBREF9"
},
{
"start": 221,
"end": 245,
"text": "Tutubalina et al. (2018)",
"ref_id": "BIBREF17"
},
{
"start": 425,
"end": 460,
"text": "Miftahutdinov and Tutubalina (2019)",
"ref_id": "BIBREF11"
},
{
"start": 525,
"end": 552,
"text": "Kalyan and Sangeetha (2020)",
"ref_id": "BIBREF16"
},
{
"start": 582,
"end": 603,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 625,
"end": 650,
"text": "(Srivastava et al., 2015)",
"ref_id": null
},
{
"start": 1075,
"end": 1099,
"text": "(Pattisapu et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 1199,
"end": 1222,
"text": "Pattisapu et al. (2020)",
"ref_id": "BIBREF13"
},
{
"start": 1303,
"end": 1321,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main drawbacks in the model of Pattisapu et al. (2020) are\u2022 Graph embedding techniques leverage only the network structure and completely ignore other valuable information associated with concepts like concept description and synonyms. Moreover it is time and resource consuming process to generate target concept vectors using graph embedding techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 As vectors of concept mentions and concepts are generated with different sizes, it is necessary to project vectors of concept mentions into the embedding space of target concept using dense layers to find the nearest target concept to the given concept mention. As parameters of these dense layers are randomly initialized, good number of training instances are required to learn these dense layers parameters. With limited number of training instances, these parameters are not learned well which limits the performance of model as illustrated in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Like Pattisapu et al. 2020, we approach MCN as text similarity problem. The contribution of this paper is the two novel changes we introduce in the original model of Pattisapu et al. (2020) to overcome the drawbacks and further improve the performance.",
"cite_spans": [
{
"start": 166,
"end": 189,
"text": "Pattisapu et al. (2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 First, we use retrofitted embeddings to represent concepts. Each concept has description and set of synonyms. We encode concept descriptions using SRoBERTa (Reimers and Gurevych, 2019) and then enhance the generated concept embeddings with the injection of synonym relationship knowledge using retrofitting algorithm (Faruqui et al., 2015) and concept synonyms. Moreover, it is easy and fast to compute retrofitted embeddings. It is the first work to leverage both concept description and synonyms to represent concepts in the form of retrofitted target concept vectors in text similarity framework based social media MCN.",
"cite_spans": [
{
"start": 158,
"end": 186,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF14"
},
{
"start": 319,
"end": 341,
"text": "(Faruqui et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Second, we generate concept vectors and concept mention vectors with same size which eliminates the need of dense layers for projecting concept mentions vectors into target concept embedding space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Following Pattisapu et al. 2020, we conduct experiments on two publicly available MCN datasets and achieve improvements up to 3.75%. Further when trained only on mapping lexicon synonyms, our model outperforms existing methods with improvements up to 14.61%. We attribute these improvements to the two novel changes introduced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Initially, all the target concept vectors are generated using SRoBERTa (Reimers and Gurevych, 2019) and retrofitting algorithm (Faruqui et al., 2015 the target concept which is closest (based on cosine similarity) to the concept mention in the embedding space is chosen. We refer to our model as \"Ontology Enriched Text Similarity Framework based RoBERTa (OETSR)\" . Each concept has a description and set of synonyms as shown in Table 2 . We generate concept vectors in two phases. First, we encode concept descriptions using SRoBERTa . SRoBERTa is a state-of-the-art sentence embedding model which maps concept descriptions to vectors such that related concepts are closer in embedding space. Second, we enchance the quality of target concept vectors with the addition of synonym relationship knowledge using retrofitting algorithm and concept synonyms. p = Retrof it(SRoBERT a(concept), synonyms) (1)",
"cite_spans": [
{
"start": 71,
"end": 99,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF14"
},
{
"start": 127,
"end": 148,
"text": "(Faruqui et al., 2015",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 429,
"end": 436,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "Learning concept mention representation is a key step in medical concept normalization (Limsopatham and Collier, 2016; Subramanyam and Sivanesan, 2020; Pattisapu et al., 2020) . Like Pattisapu et al. (2020), we use RoBERTa to learn concept mention representations. RoBERTa is a variant of BERT model trained on 160GB text data with better training strategies.",
"cite_spans": [
{
"start": 152,
"end": 175,
"text": "Pattisapu et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "q = RoBERT a(mention)",
"eq_num": "(2)"
}
],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "We train the model using AdamW optimizer which minimizes cosine embedding loss (L) between concept mention vector, q \u2208 R H and target concept vector, p \u2208 R H . Here, H is the hidden vector size in RoBERTa. During training, we freeze the target concept vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = 1 \u2212 CosineSimilarity(p, q)",
"eq_num": "(3)"
}
],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "During inference, we encode concept mention using our fine-tuned RoBERTa model. The concept mention is mapped to the closest target con-cept (based on cosine similarity) in the embedding space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "3 Experimental Details 3.1 Datasets CADEC : CSIRO Adverse Drug Event Corpus (CADEC) dataset consists of 6754 colloquial health related mentions gathered from askapatient.com and 1029 unique SNOMED-CT codes (Karimi et al., 2015) . The domain experts manually identified all the health related mentions like 'terrible pain in shoulders' and mapped them to medical codes in SNOMED-CT vocabulary. We evaluate our model on the five fold dataset 1 created from these annotations. SNOMED-CT Synonyms: SNOMED-CT is one of the commonly used medical lexicons which includes around 0.35M concepts. Each medical concept has unique id (code), concept description (fully specified name) and set of synonyms. Each synonym can be treated as a health mention. To show the performance of our model in the absence of manually annotated instances, we train our model on the dataset created from these synonyms and then evaluate on CADEC and PsyTAR datasets. All the results are reported in Table 4 .",
"cite_spans": [
{
"start": 206,
"end": 227,
"text": "(Karimi et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 970,
"end": 977,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "As concept mentions are noisy in nature, we lowercase the text and remove unnecessary special characters and non-ASCII characters. Further, we",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.2"
},
{
"text": "Existing Methods (Tutubalina et al., 2018) normalize the words with consecutive repeating characters (e.g., feeeel to feel) and replace all the medical acronym words with corresponding full forms. To generate target concept vectors using SRoBERTa, we use sentence-transfomers 3 python library. We use SRoBERTa model trained using NLI (Bowman et al., 2015) + Multi NLI (Williams et al., 2018) datasets followed by further training on STSb (Cer et al., 2017) dataset. We run retrofitting algorithm for ten iterations as suggested by the authors. There is no official validation set for CADEC and PsyTAR datasets. So, we find optimal hyperparameter values through random search over 10% of training instances as validation set like Pattisapu et al. (2020) . We use PyTorch deep learning framework (Paszke et al., 2019) and Transformers library (Wolf et al., 2019) to implement our models.",
"cite_spans": [
{
"start": 334,
"end": 355,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 368,
"end": 391,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 438,
"end": 456,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 729,
"end": 752,
"text": "Pattisapu et al. (2020)",
"ref_id": "BIBREF13"
},
{
"start": 794,
"end": 815,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 841,
"end": 860,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CADEC PsyTAR",
"sec_num": null
},
{
"text": "We choose top-k accuracy as evaluation metric following the existing work (Miftahutdinov and Tutubalina, 2019; Kalyan and Sangeetha, 2020; Pattisapu et al., 2020) in medical concept normalization in social media text. Acc@k is 1 if top k predicted concepts include the ground truth concept otherwise 0. We evaluate our model using Acc@1 and Acc@3. As CADEC and PsyTAR datasets are fivefold, reported accuracy is the average of accuracy obtained across the folds.",
"cite_spans": [
{
"start": 74,
"end": 110,
"text": "(Miftahutdinov and Tutubalina, 2019;",
"ref_id": "BIBREF11"
},
{
"start": 111,
"end": 138,
"text": "Kalyan and Sangeetha, 2020;",
"ref_id": "BIBREF16"
},
{
"start": 139,
"end": 162,
"text": "Pattisapu et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.3"
},
{
"text": "Following (Pattisapu et al., 2020) , we conduct experiments on CADEC and PsyTAR datasets. The experimental results are reported in Table 3 . As mentioned in Table 3 , our model achieves 86.16% and 85.20% on CADEC and PsyTAR datasets respectively. The current state-of-the-art model Pattisapu et al. (2020) achieves 83.18% and 82.42% on CADEC and PsyTAR datasets respectively. Our model outperforms existing methods with improvements of a) 2.98% and 2.78% when trained using training instances + SNOMED-CT synonyms and b) 3.75% and 3.34% when trained using training instances + UMLS synonyms. As the number of labeled instances generated from UMLS synonyms is more compared to the labeled generated from SNOMED-CT synonyms, more improvements are achieved when the model is trained using training instances + UMLS synonyms.",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "(Pattisapu et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 157,
"end": 164,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Further, we would like to check how well our proposed model performs when there is no human annotated instances in the training set. For this, we train our model using only labeled instances generated from mapping lexicon synonyms and evaluate on CADEC and PsyTAR datasets. As mentioned in Table 4 , our model achieves 69.47% and 65.31% on CADEC and PsyTAR datasets. The current state-of-the-art model Pattisapu et al. (2020) achieves 64.8% and 58.4% on CADEC and Psy-TAR datasets respectively. Our model outperforms existing methods with improvements of a) 4.67% and 6.91% when trained using SNOMED-CT synonyms and b) 5.66% and 14.61% when trained using UMLS synonyms. We attribute these improvements to the novel changes introduced by us in the text similarity framework based model of Pattisapu et al. (2020) .",
"cite_spans": [
{
"start": 402,
"end": 425,
"text": "Pattisapu et al. (2020)",
"ref_id": "BIBREF13"
},
{
"start": 788,
"end": 811,
"text": "Pattisapu et al. (2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 290,
"end": 297,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Existing Methods (Pattisapu et al., 2020) 5 Analysis and Discussion",
"cite_spans": [
{
"start": 17,
"end": 41,
"text": "(Pattisapu et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CADEC PsyTAR",
"sec_num": null
},
{
"text": "In this paper, we develop a model which learns to map user-generated concept mentions to standard concepts in clinical knowledge base. To find the reasons for wrong mappings done by our model, we manually check all the erroneous mappings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.1"
},
{
"text": "Our model failed in cases where the concept mention and predicted concept are exactly the same. For example, the concept mention 'weight gain' is mapped to 'weight gain' but the ground truth concept is 'excessive weight gain'. This wrong mapping could be due to wrong annotation or interpretation of the mention depends on its context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.1"
},
{
"text": "In some cases, our model assigned concept which is more specific compared to the ground truth concept. For example, the mentions 'at times felt very anxious' and 'very anxious' are mapped to 'severe anxiety' but the ground truth is 'anxiety'. Here the assigned concept is more specific and hence appropriate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.1"
},
{
"text": "In some cases, our model assigned abstract concept rather than specific concept. For example, our model assigned the mention 'sweat more' to the concept 'sweating' rather than the ground truth concept 'excessive sweating'. Here the concept 'excessive sweating' is more specific than the concept 'sweating'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.1"
},
{
"text": "Some of the erroneous mappings occurred when concept mention and the predicted concept overlap. For example, the mention 'anti-constipating' is mapped to 'constipation' but the ground truth concept is 'diarrhea'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.1"
},
{
"text": "In this work, we come up with a text similarity based model to normalize colloquial health related mentions in user-generated posts. Pattisapu et al. (2020) formulate MCN as text similarity problem and propose a model based on RoBERTa and graph based target concept vectors. Our model is an enhancement of the original model of (Pattisapu et al., 2020) with two simple and novel changes which improve the performance up to 3.75%. We use retrofitted target concept vectors to represent concepts which leverage both concept description and synonyms, unlike graph embedding techniques. Moreover, it is easy and faster to compute retrofitted target concept vectors compared to graph embedding based target concept vectors. In future, we would like to explore options like distant supervision to generate additional training examples.",
"cite_spans": [
{
"start": 133,
"end": 156,
"text": "Pattisapu et al. (2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://cutt.ly/Gi6kka6 2 https://doi.org/10.5281/zenodo.3236318",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/UKPLab/sentence-transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez- Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evalu- ation (SemEval-2017), pages 1-14.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Snomed-ct: The advanced terminology and coding system for ehealth",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Donnelly",
"suffix": ""
}
],
"year": 2006,
"venue": "Studies in health technology and informatics",
"volume": "121",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Donnelly. 2006. Snomed-ct: The advanced ter- minology and coding system for ehealth. Studies in health technology and informatics, 121:279.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Retrofitting word vectors to semantic lexicons",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Sujay",
"middle": [],
"last": "Kumar Jauhar",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1606--1615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1606-1615.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bertmcn: Mapping colloquial phrases to standard medical concepts using bert and highway network",
"authors": [
{
"first": "Katikapalli",
"middle": [],
"last": "Subramanyam Kalyan",
"suffix": ""
},
{
"first": "Sivanesan",
"middle": [],
"last": "Sangeetha",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katikapalli Subramanyam Kalyan and Sivanesan Sangeetha. 2020. Bertmcn: Mapping colloquial phrases to standard medical concepts using bert and highway network. Technical report, EasyChair.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Cadec: A corpus of adverse drug event annotations",
"authors": [
{
"first": "Sarvnaz",
"middle": [],
"last": "Karimi",
"suffix": ""
},
{
"first": "Alejandro",
"middle": [],
"last": "Metke-Jimenez",
"suffix": ""
},
{
"first": "Madonna",
"middle": [],
"last": "Kemp",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of biomedical informatics",
"volume": "55",
"issue": "",
"pages": "73--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. Journal of biomedi- cal informatics, 55:73-81.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Medical concept normalization for online user-generated texts",
"authors": [
{
"first": "Kathy",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sadid",
"middle": [
"A"
],
"last": "Hasan",
"suffix": ""
},
{
"first": "Oladimeji",
"middle": [],
"last": "Farri",
"suffix": ""
},
{
"first": "Alok",
"middle": [],
"last": "Choudhary",
"suffix": ""
},
{
"first": "Ankit",
"middle": [],
"last": "Agrawal",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE International Conference on Healthcare Informatics (ICHI)",
"volume": "",
"issue": "",
"pages": "462--469",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathy Lee, Sadid A Hasan, Oladimeji Farri, Alok Choudhary, and Ankit Agrawal. 2017. Medical con- cept normalization for online user-generated texts. In 2017 IEEE International Conference on Health- care Informatics (ICHI), pages 462-469. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adapting phrase-based machine translation to normalise medical terms in social media messages",
"authors": [
{
"first": "Nut",
"middle": [],
"last": "Limsopatham",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1675--1680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nut Limsopatham and Nigel Collier. 2015. Adapting phrase-based machine translation to normalise med- ical terms in social media messages. In Proceed- ings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1675-1680.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Normalising medical concepts in social media texts by learning semantic representation",
"authors": [
{
"first": "Nut",
"middle": [],
"last": "Limsopatham",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2016,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nut Limsopatham and Nigel Collier. 2016. Normalis- ing medical concepts in social media texts by learn- ing semantic representation. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Deep neural models for medical concept normalization in user-generated texts",
"authors": [
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinov",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "393--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zulfat Miftahutdinov and Elena Tutubalina. 2019. Deep neural models for medical concept normaliza- tion in user-generated texts. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics: Student Research Workshop, pages 393-399.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "8026--8037",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Ad- vances in neural information processing systems, pages 8026-8037.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Medical Concept Normalization by Encoding Target Knowledge",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Pattisapu",
"suffix": ""
},
{
"first": "Sangameshwar",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Palshikar",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Machine Learning for Health NeurIPS Workshop",
"volume": "116",
"issue": "",
"pages": "246--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Pattisapu, Sangameshwar Patil, Girish Palshikar, and Vasudeva Varma. 2020. Medical Concept Nor- malization by Encoding Target Knowledge. In Proceedings of the Machine Learning for Health NeurIPS Workshop, volume 116 of Proceedings of Machine Learning Research, pages 246-259. PMLR.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Sentencebert: Sentence embeddings using siamese bertnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3973--3983",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3973-3983.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Deep contextualized medical concept normalization in social media text",
"authors": [
{
"first": "Kalyan",
"middle": [
"Katikapalli"
],
"last": "Subramanyam",
"suffix": ""
},
{
"first": "Sangeetha",
"middle": [],
"last": "Sivanesan",
"suffix": ""
}
],
"year": 2020,
"venue": "Third International Conference on Computing and Network Communications (CoCoNet'19)",
"volume": "171",
"issue": "",
"pages": "1353--1362",
"other_ids": {
"DOI": [
"10.1016/j.procs.2020.04.145"
]
},
"num": null,
"urls": [],
"raw_text": "Kalyan Katikapalli Subramanyam and Sangeetha Sivanesan. 2020. Deep contextualized medical con- cept normalization in social media text. Procedia Computer Science, 171:1353 -1362. Third Interna- tional Conference on Computing and Network Com- munications (CoCoNet'19).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Medical concept normalization in social media posts with recurrent neural networks",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
},
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinov",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Nikolenko",
"suffix": ""
},
{
"first": "Valentin",
"middle": [],
"last": "Malykh",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of biomedical informatics",
"volume": "84",
"issue": "",
"pages": "93--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Tutubalina, Zulfat Miftahutdinov, Sergey Nikolenko, and Valentin Malykh. 2018. Medical concept normalization in social media posts with recurrent neural networks. Journal of biomedical informatics, 84:93-102.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Huggingface's transformers: Stateof-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, et al. 2019. Huggingface's transformers: State- of-the-art natural language processing. ArXiv, pages arXiv-1910.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A systematic approach for developing a corpus of patient reported adverse drug events: a case study for ssri and snri medications",
"authors": [
{
"first": "Maryam",
"middle": [],
"last": "Zolnoori",
"suffix": ""
},
{
"first": "Kin",
"middle": [
"Wah"
],
"last": "Fung",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"B"
],
"last": "Patrick",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Fontelo",
"suffix": ""
},
{
"first": "Hadi",
"middle": [],
"last": "Kharrazi",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Faiola",
"suffix": ""
},
{
"first": "Yi Shuan Shirley",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Christina",
"middle": [
"E"
],
"last": "Eldredge",
"suffix": ""
},
{
"first": "Jake",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Conway",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of biomedical informatics",
"volume": "90",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maryam Zolnoori, Kin Wah Fung, Timothy B Patrick, Paul Fontelo, Hadi Kharrazi, Anthony Faiola, Yi Shuan Shirley Wu, Christina E Eldredge, Jake Luo, Mike Conway, et al. 2019. A systematic ap- proach for developing a corpus of patient reported adverse drug events: a case study for ssri and snri medications. Journal of biomedical informatics, 90:103091.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "PsyTAR :Zolnoori et al. (2019) gathered psychiatric medicines related reviews from askapatient.com and created this dataset. It consists of 6556 colloquial health related mentions and 618 unique SNOMED-CT codes.Miftahutdinov and Tutubalina (2019) created five fold dataset 2 from these annotations and released it publicly.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"content": "<table/>",
"text": "",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"text": "Some of the SNOMED-CT concepts with concept-id (unique medical code), description (fully specified name) and synonyms.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"text": "Comparison of existing methods and our model. \u03a6 -model is trained using training instances + SNOMED-CT synonyms likePattisapu et al. (2020) and \u03a0 model is trained using training instances + UMLS synonyms.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF7": {
"content": "<table/>",
"text": "Comparison of existing methods and our model. Here model is trained using ontology synonyms and evaluated on the corresponding test sets. \u03a6 -model is trained using SNOMED-CT synonyms likePattisapu et al. (2020) and \u03a0 -model is trained using UMLS synonyms.",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}