{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:07:13.286147Z"
},
"title": "Scalable Few-Shot Learning of Robust Biomedical Name Representations",
"authors": [
{
"first": "Pieter",
"middle": [],
"last": "Fivez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Antwerp",
"location": {}
},
"email": "pieter.fivez@uantwerpen.be"
},
{
"first": "Simon",
"middle": [],
"last": "\u0160uster",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Melbourne",
"location": {}
},
"email": "simon.suster@unimelb.edu.au"
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Antwerp",
"location": {}
},
"email": "walter.daelemans@uantwerpen.be"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent research on robust representations of biomedical names has focused on modeling large amounts of fine-grained conceptual distinctions using complex neural encoders. In this paper, we explore the opposite paradigm: training a simple encoder architecture using only small sets of names sampled from high-level biomedical concepts. Our encoder post-processes pretrained representations of biomedical names, and is effective for various types of input representations, both domain-specific and unsupervised. We validate our proposed few-shot learning approach on multiple biomedical relatedness benchmarks, and show that it allows for continual learning, where we accumulate information from various conceptual hierarchies to consistently improve encoder performance. Given these findings, we propose our approach as a low-cost alternative for exploring the impact of conceptual distinctions on robust biomedical name representations. Our code is open-source and available at www.github.com/clips/fewshot-biomedical-names.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent research on robust representations of biomedical names has focused on modeling large amounts of fine-grained conceptual distinctions using complex neural encoders. In this paper, we explore the opposite paradigm: training a simple encoder architecture using only small sets of names sampled from high-level biomedical concepts. Our encoder post-processes pretrained representations of biomedical names, and is effective for various types of input representations, both domain-specific and unsupervised. We validate our proposed few-shot learning approach on multiple biomedical relatedness benchmarks, and show that it allows for continual learning, where we accumulate information from various conceptual hierarchies to consistently improve encoder performance. Given these findings, we propose our approach as a low-cost alternative for exploring the impact of conceptual distinctions on robust biomedical name representations. Our code is open-source and available at www.github.com/clips/fewshot-biomedical-names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent research in biomedical NLP has focused on learning robust representations of biomedical names. To achieve robustness, an encoder should represent the semantic similarity and relatedness between different names (e.g. by their closeness in the embedding space), while its embeddings should also remain as transferable and generally applicable as self-supervised pretrained representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Prior research into robust representations has shown three distinct tendencies. Firstly, research typically focuses on encoders with complex neural architectures and a large number of parameters. As compensation for this complexity, such models can be heavily regularized during training, e.g. by tying the output of a nested LSTM to a pooled embedding of its input representations (Phan et al., 2019), or by integrating a fine-tuned BERT model with sparse lexical representations (Sung et al., 2020). Secondly, encoders are typically trained on fine-grained concepts from biomedical ontologies such as the UMLS, i.e., concepts with no child nodes in the ontological directed graph. Small synonym sets of such fine-grained concepts are readily available as training data, and often serve as evaluation data for normalization tasks to which trained encoders can be applied.",
"cite_spans": [
{
"start": 382,
"end": 401,
"text": "(Phan et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 481,
"end": 500,
"text": "(Sung et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Lastly, as a result of using fine-grained concepts, vast amounts of biomedical names are needed to model the large collection of fine-grained distinctions present in ontologies. For instance, Phan et al. (2019) train their encoder on 156K disorder names. These three tendencies share an underlying assumption: complex neural encoder architectures can learn biomedical semantics by generalizing in a bottom-up fashion from large amounts of fine-grained semantic distinctions, if provided with sufficient quantities of training data. However, it is not self-evident that such an approach is the most effective way to achieve general-purpose biomedical name representations. For instance, it does not directly address which conceptual distinctions are actually relevant for improving representations for downstream NLP applications. Finding and exploiting relevant distinctions is an empirical question, and as such requires low-cost exploration of various conceptual hierarchies. Such a heuristic search is expensive in the current paradigm.",
"cite_spans": [
{
"start": 192,
"end": 210,
"text": "Phan et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we explore a scalable few-shot learning approach for robust biomedical name representations which is orthogonal to this paradigm. We investigate to what extent we can fit a simple encoder architecture using only a small selection of data, with a limited number of concepts containing only a few samples each (i.e., few-shot learning). To this end, we do not use fine-grained concepts for training, but more general higher-level concepts which span a large range of fine-grained concepts. Table 1 gives an example of such a larger grouping of biomedical names.",
"cite_spans": [],
"ref_spans": [
{
"start": 502,
"end": 509,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper offers two main contributions. Firstly, our proposed approach offers an alternative for training biomedical name encoders at much lower computational cost, both for training and for inference at test time. It is applicable to large-scale hierarchies containing tens of thousands of names, and is equally effective for different types of pretrained representations when tested on various biomedical relatedness benchmarks. Secondly, we show that this approach allows for low-cost continual learning from multiple concept hierarchies, and as such can help with the accumulation of relevant domain-specific information for downstream biomedical NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach is similar to supervised postprocessing techniques of word embeddings such as retrofitting and counterfitting (Faruqui et al., 2015; Mrk\u0161i\u0107 et al., 2016) , but instead post-processes pretrained representations of biomedical names.",
"cite_spans": [
{
"start": 123,
"end": 145,
"text": "(Faruqui et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 146,
"end": 166,
"text": "Mrk\u0161i\u0107 et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
{
"text": "[Table 2 — names per concept (min, max, mean, stdev): ICD-10: 247, 40,519, 3,414, 8,693; SNOMED-CT: 397, 19,114, 3,532, 4,094; + ambiguous: 1,108, 23,915, 4,990, 5,134] Our encoder architecture is a feedforward neural network with a Rectified Linear Unit (ReLU) as nonlinear activation function. This neural network transforms a pretrained representation of a biomedical name, after which this transformation is averaged with the pretrained representation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder architecture",
"sec_num": "2.1"
},
{
"text": "f(n) = (enc(u_n) + u_n) / 2    (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder architecture",
"sec_num": "2.1"
},
{
"text": "where f(n) is the output representation for a biomedical name, u_n is its pretrained input representation, and enc is the feedforward neural network which transforms the input representation. The averaging step ensures that the encoder architecture learns to update the pretrained input representation rather than create an entirely new representation. This makes our model more robust against overfitting in few-shot learning settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder architecture",
"sec_num": "2.1"
},
{
"text": "Our training objectives are based on the state-of-the-art BNE model by Phan et al. (2019) and the DAN model by Fivez et al. (2021b), which generalizes the BNE model to any hierarchical level of biomedical concepts. Our framework requires a set of concepts C, where each concept c \u2208 C contains a set of concept names C_n. The set of biomedical names N contains the union of all those sets of concept names. We propose a simple multi-task training regime which applies two training objectives to each biomedical name n \u2208 N. We use cosine distance as the distance function d for both objectives.",
"cite_spans": [
{
"start": 70,
"end": 88,
"text": "Phan et al. (2019)",
"ref_id": "BIBREF13"
},
{
"start": 110,
"end": 130,
"text": "Fivez et al. (2021b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training objectives",
"sec_num": "2.2"
},
{
"text": "We enforce embedding similarity between names from the same concept by using a siamese triplet loss (Chechik et al., 2010). This loss forces the encoding of a biomedical name f(n) to be closer to the encoding of a semantically similar name f(n_pos) than to that of an encoded negative sample name f(n_neg), within a specified (possibly tuned) margin:",
"cite_spans": [
{
"start": 109,
"end": 131,
"text": "(Chechik et al., 2010)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "pos = d(f(n), f(n_pos)); neg = d(f(n), f(n_neg)); L_sem = max(pos - neg + margin, 0)",
"eq_num": "(2)"
}
],
"section": "Semantic similarity",
"sec_num": null
},
{
"text": "To select negative names during training we apply distance-weighted negative sampling (Wu et al., 2017) over all training names, since this has been shown to be more effective than hard or random negative sampling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity",
"sec_num": null
},
{
"text": "Conceptually grounded regularization To prevent the model from overfitting on the semantic similarity objective, we regularize it by grounding the output representations to a stable and meaningful target. Simple approximations of prototypical concept representations can already be very effective as targets (Fivez et al., 2021a). Following the model by Fivez et al. (2021b), we use a grounding target which is applicable to any level of categorization, from fine-grained concept distinctions to higher-level groupings of names. This target is a compromise between the contextual meaningfulness and conceptual meaningfulness objectives of the BNE model. Rather than constraining a name encoding either to its pretrained name representation or to a pretrained representation of its concept, we minimize the distance to the average of both pretrained representations:",
"cite_spans": [
{
"start": 308,
"end": 328,
"text": "(Fivez et al., 2021a",
"ref_id": "BIBREF4"
},
{
"start": 355,
"end": 375,
"text": "Fivez et al. (2021b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity",
"sec_num": null
},
{
"text": "u_c = (1/|C_n|) \u2211_{n \u2208 C_n} u_n;  u_ground = (u_c + u_n) / 2;  L_ground = d(f(n), u_ground)    (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity",
"sec_num": null
},
{
"text": "where the concept representation u_c is approximated by averaging each pretrained embedding representation u_n from the set of names C_n belonging to the concept.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity",
"sec_num": null
},
{
"text": "This constraint implies that the dimensionality of the encoder output should be the same as that of the input. However, if the input dimensionality is smaller than the desired output dimensionality, this could be solved using e.g. random projections, which work well for increasing the dimensionality of neural encoder inputs (Wieting and Kiela, 2019) .",
"cite_spans": [
{
"start": 326,
"end": 351,
"text": "(Wieting and Kiela, 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity",
"sec_num": null
},
{
"text": "Multi-task loss Our multi-task loss sums the losses of the two training objectives:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = \u03b1L_sem + \u03b2L_ground",
"eq_num": "(4)"
}
],
"section": "Semantic similarity",
"sec_num": null
},
{
"text": "where \u03b1 and \u03b2 are possible weights for the individual losses. Since both losses directly reflect cosine distances, they are similarly scaled and do not require weighting to work properly. In our experiments, \u03b1 = \u03b2 = 1 showed the most robust performance across all settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity",
"sec_num": null
},
{
"text": "We extract sets of high-level concepts and their constituent names from two large-scale hierarchies of disorder concepts, ICD-10 and SNOMED-CT. Table 2 gives an overview of our data distributions.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 149,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Training data",
"sec_num": "2.3"
},
{
"text": "We use the 2018 version of the ICD-10 coding system. 1 We select the 21 chapters as concept labels, and assign the reference name of each code in a chapter to its concept label. Table 1 gives an example of how such a grouping includes diverse semantic relations.",
"cite_spans": [
{
"start": 53,
"end": 54,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 178,
"end": 185,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "ICD-10",
"sec_num": null
},
{
"text": "We use the 2018AB release of the UMLS ontology 2 to extract a directed ontological graph of SNOMED-CT concepts. We then select the first-degree child nodes of concept C0012634, which is the parent concept for all disorders. We then remove those children which are direct parents to other selected children, since they are redundant for our purpose. This leaves us with 87 concepts, to which we assign the reference terms of all their child concepts in the ontological graph as biomedical names. To make this setup directly comparable to our ICD-10 setup, we select the 21 largest concepts. Finally, we leave out ambiguous names which belong to multiple concepts. Table 2 shows the impact on the data distribution.",
"cite_spans": [],
"ref_spans": [
{
"start": 663,
"end": 670,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "SNOMED-CT",
"sec_num": null
},
{
"text": "We experiment with three pretrained name representations. As a first baseline, we use 300-dimensional fastText (Bojanowski et al., 2017) word embeddings which we train on 76M sentences of preprocessed MEDLINE articles released by Hakala et al. (2016). We use average pooling (Shen et al., 2018) to extract a 300-dimensional name representation. As a second baseline, we average the 768-dimensional context-specific token activations of a name extracted from the publicly released BioBERT model (Lee et al., 2019).",
"cite_spans": [
{
"start": 107,
"end": 132,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF0"
},
{
"start": 226,
"end": 246,
"text": "Hakala et al. (2016)",
"ref_id": "BIBREF6"
},
{
"start": 272,
"end": 291,
"text": "(Shen et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 491,
"end": 509,
"text": "(Lee et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained representations",
"sec_num": "3.1"
},
{
"text": "As a state-of-the-art reference, we extract 200-dimensional name representations using the publicly released pretrained BNE model with skip-gram word embeddings, BNE + SG_w, 3 which was trained on approximately 16K synonym sets of disease",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained representations",
"sec_num": "3.1"
},
{
"text": "We randomly sample a small fixed number of names from each concept in our training data as the actual few-shot training names. We then randomly sample the same number of names as validation data to calculate the multi-task loss as a stopping criterion. This criterion is also used to tune the size of the encoder network. Using only one hidden layer proved best in all settings, which leaves only the dimensionality of this layer to be tuned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training details",
"sec_num": "3.2"
},
{
"text": "Our encoder network is implemented in PyTorch (Paszke et al., 2019). Adam optimization (Kingma and Ba, 2015) is performed with a batch size of 16, a learning rate of 0.001, and a dropout rate of 0.5. Input strings are first tokenized using the Pattern tokenizer (Smedt and Daelemans, 2012) and then lowercased. We use a triplet margin of 0.1 for the siamese triplet loss L_sem defined in Equation 2.",
"cite_spans": [
{
"start": 46,
"end": 67,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 266,
"end": 293,
"text": "(Smedt and Daelemans, 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training details",
"sec_num": "3.2"
},
{
"text": "We evaluate our trained encoders on three biomedical benchmarks of semantic relatedness and similarity, which allow us to compare similarity scores between name embeddings with human judgments of relatedness. MayoSRS (Pakhomov et al., 2011) contains multi-word name pairs of related but different fine-grained concepts. UMNSRS (Pakhomov et al., 2016) contains only single-word pairs, and makes a distinction between relatedness and similarity, which is a narrower form of relatedness. Finally, EHR-RelB (Schulz et al., 2020) is [Table 3 caption: Spearman's rank correlation coefficient between human judgments and similarity scores of name embeddings, reported on semantic similarity (sim) and relatedness (rel) benchmarks. The highest score is denoted in bold; the second highest is underlined.]",
"cite_spans": [
{
"start": 210,
"end": 233,
"text": "(Pakhomov et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 320,
"end": 343,
"text": "(Pakhomov et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 499,
"end": 519,
"text": "(Schulz et al., 2020",
"ref_id": null
}
],
"ref_spans": [
{
"start": 520,
"end": 527,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},
{
"text": "much larger than the other benchmarks, and contains multi-word concept pairs which are chosen based on co-occurrence in electronic health records. This ensures that the evaluated concept pairs are actually relevant for downstream applications such as information retrieval. We average all test results over 5 different random training samples. We use cosine similarity as the similarity score for all baseline representations and trained encoders. Figure 1 shows the impact of the number of few-shot training names on performance when using fastText representations. Our model already substantially improves over the baseline with only 5 names per concept (105 in total), and maintains consistent improvement up to 15 few-shot names. This confirms that our approach is well-suited to anticipate expected improvements from training on large-scale hierarchies. Table 3 shows the results on all benchmarks for 15-shot learning. All encoders were tuned to 9,600 hidden dimensions. We include two state-of-the-art biomedical name encoders in our comparison. Firstly, BioSyn (Sung et al., 2020) sums the weighted inner products of fine-tuned BioBERT representations and sparse TF-IDF representations into one similarity score between two names. The pre-trained model 4 for which we report results was trained on the NCBI disease benchmark (Dogan et al., 2014) for biomedical entity normalization. Secondly, we include the results of the conceptually grounded Deep Averaging Network by Fivez et al. (2021a), which was trained on SNOMED-CT synonym sets mapped into larger ICD-10 categories.",
"cite_spans": [
{
"start": 1074,
"end": 1093,
"text": "(Sung et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 1484,
"end": 1504,
"text": "Fivez et al. (2021a)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 455,
"end": 463,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 865,
"end": 872,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},
{
"text": "The results show various trends. Firstly, almost all trained encoders improve over their input baselines for all benchmarks, regardless of the type of input representation. Secondly, the performance increase is consistent for both ICD-10 and SNOMED-CT, even though their conceptual hierarchies are substantially different. Lastly, we also look at continual learning from SNOMED-CT to ICD-10 (S \u2192 I) or vice versa (I \u2192 S), where we use the output of the first model as input representations to train the second model. This approach leads to systematic improvements for all representation types, including the state-of-the-art BNE representations. In other words, we provide tangible empirical evidence that few-shot robust representations can allow for continual specialization in biomedical semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},
{
"text": "To better understand how our few-shot learning approach can have a visible impact on various relatedness benchmarks, Table 4 gives an example of nearest neighbor names from the training set of SNOMED-CT names for the validation mention urinary hesitancy. While the pretrained BNE model makes various topical associations, our 15-shot model using the BNE representations as input has learned to cluster around the semantics of urinary tract disorders. As this already generalizes to validation mentions, we can expect the model to transfer this information to downstream applications wherever urinary tract disorders are relevant. This applies to all 21 high-level topics which were simultaneously encoded for both the ICD-10 and SNOMED-CT ontologies.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 124,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},
{
"text": "We have proposed a novel approach for scalable few-shot learning of robust biomedical name representations, which trains a simple encoder architecture using only small subsamples of names from higher-level concepts of large-scale hierarchies. Our model works for various pretrained input embeddings, including already specialized name representations, and can accumulate information over various hierarchies to systematically improve performance on biomedical relatedness benchmarks. Future work will investigate whether such improvements trickle down properly to downstream biomedical NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "4"
},
{
"text": "1 https://www.cdc.gov/nchs/icd\n2 https://uts.nlm.nih.gov/home.html\n3 https://github.com/minhcp/BNE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/dmis-lab/BioSyn",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their feedback. This research was carried out in the framework of the Accumulate VLAIO SBO project, funded by the government agency Flanders Innovation & Entrepreneurship (VLAIO). This research also received funding from the Flemish Government (AI Research Program).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Large scale online learning of image similarity through ranking",
"authors": [
{
"first": "Gal",
"middle": [],
"last": "Chechik",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Uri",
"middle": [],
"last": "Shalit",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Machine Learning Research",
"volume": "11",
"issue": "",
"pages": "1109--1135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gal Chechik, Varun Sharma, Uri Shalit, and Samy Ben- gio. 2010. Large scale online learning of image sim- ilarity through ranking. Journal of Machine Learn- ing Research, 11:1109-1135.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "NCBI disease corpus: A resource for disease name recognition and concept normalization",
"authors": [
{
"first": "Rezarta",
"middle": [
"Islamaj"
],
"last": "Dogan",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Biomedical Informatics",
"volume": "47",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. NCBI disease corpus: A resource for disease name recognition and concept normalization. Journal of Biomedical Informatics, 47:1-10.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Retrofitting word vectors to semantic lexicons",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Sujay",
"middle": [],
"last": "Kumar Jauhar",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1606--1615",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1184"
]
},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1606-1615, Denver, Colorado. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Conceptual grounding constraints for truly robust biomedical name representations",
"authors": [
{
"first": "Pieter",
"middle": [],
"last": "Fivez",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Suster",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "2440--2450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pieter Fivez, Simon Suster, and Walter Daelemans. 2021a. Conceptual grounding constraints for truly robust biomedical name representations. In Proceed- ings of the 16th Conference of the European Chap- ter of the Association for Computational Linguistics: Main Volume, pages 2440-2450, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Integrating higher-level semantics into robust biomedical name representations",
"authors": [
{
"first": "Pieter",
"middle": [],
"last": "Fivez",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Suster",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 12th International Workshop on Health Text Mining and Information Analysis",
"volume": "",
"issue": "",
"pages": "49--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pieter Fivez, Simon Suster, and Walter Daelemans. 2021b. Integrating higher-level semantics into ro- bust biomedical name representations. In Proceed- ings of the 12th International Workshop on Health Text Mining and Information Analysis, pages 49-58, online. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Syntactic analyses and named entity recognition for PubMed and PubMed Central -up-to-the-minute",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Hakala",
"suffix": ""
},
{
"first": "Suwisa",
"middle": [],
"last": "Kaewphan",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Salakoski",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 15th Workshop on Biomedical Natural Language Processing",
"volume": "",
"issue": "",
"pages": "102--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Hakala, Suwisa Kaewphan, Tapio Salakoski, and Filip Ginter. 2016. Syntactic analyses and named entity recognition for PubMed and PubMed Central -up-to-the-minute. Proceedings of the 15th Work- shop on Biomedical Natural Language Processing, pages 102-107.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference for Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference for Learning Representations (ICLR).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {
"DOI": [
"10.1093/bioinformatics/btz682"
]
},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre- trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Counter-fitting word vectors to linguistic constraints",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Diarmuid",
"middle": [],
"last": "\u00d3 S\u00e9aghdha",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Lina",
"middle": [
"M."
],
"last": "Rojas-Barahona",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "142--148",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1018"
]
},
"num": null,
"urls": [],
"raw_text": "Nikola Mrk\u0161i\u0107, Diarmuid \u00d3 S\u00e9aghdha, Blaise Thom- son, Milica Ga\u0161i\u0107, Lina M. Rojas-Barahona, Pei- Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142-148, San Diego, California. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Corpus domain effects on distributional semantic modeling of medical terms",
"authors": [
{
"first": "Serguei",
"middle": [
"V.S."
],
"last": "Pakhomov",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Finley",
"suffix": ""
},
{
"first": "Reed",
"middle": [],
"last": "McEwan",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Genevieve",
"middle": [
"B."
],
"last": "Melton",
"suffix": ""
}
],
"year": 2016,
"venue": "Bioinformatics",
"volume": "32",
"issue": "23",
"pages": "3635--3644",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Serguei V.S. Pakhomov, Greg Finley, Reed McE- wan, Yan Wang, and Genevieve B. Melton. 2016. Corpus domain effects on distributional seman- tic modeling of medical terms. Bioinformatics, 32(23):3635-3644.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Towards a framework for developing semantic relatedness reference standards",
"authors": [
{
"first": "Serguei",
"middle": [
"V.S."
],
"last": "Pakhomov",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Bridget",
"middle": [],
"last": "McInnes",
"suffix": ""
},
{
"first": "Genevieve",
"middle": [
"B."
],
"last": "Melton",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Ruggieri",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"G."
],
"last": "Chute",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Biomedical Informatics",
"volume": "44",
"issue": "",
"pages": "251--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Serguei V.S. Pakhomov, Ted Pedersen, Bridget McInnes, Genevieve B. Melton, Alexander Rug- gieri, and Christopher G. Chute. 2011. Towards a framework for developing semantic relatedness ref- erence standards. Journal of Biomedical Informat- ics, 44:251-265.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "PyTorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- Torch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8024-8035. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Robust representation learning of biomedical names",
"authors": [
{
"first": "Minh",
"middle": [
"C"
],
"last": "Phan",
"suffix": ""
},
{
"first": "Aixin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Tay",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3275--3285",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1317"
]
},
"num": null,
"urls": [],
"raw_text": "Minh C. Phan, Aixin Sun, and Yi Tay. 2019. Ro- bust representation learning of biomedical names. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3275-3285, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Biomedical concept relatedness - a large EHR-based benchmark",
"authors": [
{
"first": "Claudia",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Levy-Kramer",
"suffix": ""
},
{
"first": "Camille",
"middle": [],
"last": "Van Assel",
"suffix": ""
},
{
"first": "Miklos",
"middle": [],
"last": "Kepes",
"suffix": ""
},
{
"first": "Nils",
"middle": [],
"last": "Hammerla",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6565--6575",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.577"
]
},
"num": null,
"urls": [],
"raw_text": "Claudia Schulz, Josh Levy-Kramer, Camille Van Assel, Miklos Kepes, and Nils Hammerla. 2020. Biomedi- cal concept relatedness -a large EHR-based bench- mark. In Proceedings of the 28th International Con- ference on Computational Linguistics, pages 6565- 6575, Barcelona, Spain (Online). International Com- mittee on Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Baseline needs more love: On simple word-embedding based models and associated pooling mechanisms",
"authors": [
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Guoyin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wenlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Renqiang Min",
"suffix": ""
},
{
"first": "Qinliang",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Henao",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers)",
"volume": "",
"issue": "",
"pages": "440--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dinghan Shen, Guoyin Wang, Wenlin Wang, Mar- tin Renqiang Min, Qinliang Su, Yizhe Zhang, Chun- yuan Li, Ricardo Henao, and Lawrence Carin. 2018. Baseline needs more love: On simple word- embedding based models and associated pooling mechanisms. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Long Papers), pages 440-450.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Pattern for Python",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "De Smedt",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Machine Learning Research",
"volume": "13",
"issue": "",
"pages": "2031--2035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom De Smedt and Walter Daelemans. 2012. Pattern for Python. Journal of Machine Learning Research, 13:2031-2035.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Biomedical entity representations with synonym marginalization",
"authors": [
{
"first": "Mujeen",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Hwisang",
"middle": [],
"last": "Jeon",
"suffix": ""
},
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3641--3650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mujeen Sung, Hwisang Jeon, Jinhyuk Lee, and Jaewoo Kang. 2020. Biomedical entity representations with synonym marginalization. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 3641-3650, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "No training required: Exploring random encoders for sentence classification",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting and Douwe Kiela. 2019. No train- ing required: Exploring random encoders for sen- tence classification. In International Conference on Learning Representations.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Sampling matters in deep embedding learning",
"authors": [
{
"first": "Chao-Yuan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "R.",
"middle": [],
"last": "Manmatha",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"J."
],
"last": "Smola",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Krahenbuhl",
"suffix": ""
}
],
"year": 2017,
"venue": "ICCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao-Yuan Wu, R. Manmatha, Alexander J. Smola, and Philipp Krahenbuhl. 2017. Sampling matters in deep embedding learning. In ICCV.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Few-shot performance for fastText encoders on MayoSRS, averaged over 5 random samples.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"num": null,
"html": null,
"text": "Example of how reference names are grouped together within the ICD-10 hierarchy of disorders.",
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Chapter V: Mental and behavioural disorders</td></tr><tr><td>F34</td><td>F63</td></tr><tr><td colspan=\"2\">Persistent mood disorders Habit and impulse disorders</td></tr><tr><td>F34.0</td><td>F63.0</td></tr><tr><td>Cyclothymia</td><td>Pathological gambling</td></tr><tr><td>F34.1</td><td>F63.1</td></tr><tr><td>Dysthymia</td><td>Pyromania</td></tr></table>"
},
"TABREF1": {
"num": null,
"html": null,
"text": "Descriptive statistics about the number of names per concept for our training data.",
"type_str": "table",
"content": "<table/>"
},
"TABREF4": {
"num": null,
"html": null,
"text": "A comparison between the rankings of 315 SNOMED-CT training names for the validation mention urinary hesitancy. Non-matching names are underlined. While the pretrained BNE model makes various topical associations, our 15-shot model using the BNE representations as input has learned to cluster around the semantics of urinary tract disorders.",
"type_str": "table",
"content": "<table/>"
}
}
}
}