{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:07:42.640108Z"
},
"title": "Stress Test Evaluation of Biomedical Word Embeddings",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Araujo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pontificia Universidad Cat\u00f3lica",
"location": {
"country": "Chile"
}
},
"email": ""
},
{
"first": "Andr\u00e9s",
"middle": [],
"last": "Carvallo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pontificia Universidad Cat\u00f3lica",
"location": {
"country": "Chile"
}
},
"email": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Aspillaga",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pontificia Universidad Cat\u00f3lica",
"location": {
"country": "Chile"
}
},
"email": ""
},
{
"first": "Camilo",
"middle": [],
"last": "Thorne",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Denis",
"middle": [],
"last": "Parra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pontificia Universidad Cat\u00f3lica",
"location": {
"country": "Chile"
}
},
"email": "dparra@ing.puc.cl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The success of pretrained word embeddings has motivated their use in the biomedical domain, with contextualized embeddings yielding remarkable results in several biomedical NLP tasks. However, there is a lack of research on quantifying their behavior under severe \"stress\" scenarios. In this work, we systematically evaluate three language models with adversarial examples, automatically constructed tests that allow us to examine how robust the models are. We propose two types of stress scenarios focused on the biomedical named entity recognition (NER) task, one inspired by spelling errors and another based on the use of synonyms for medical terms. Our experiments with three benchmarks show that the performance of the original models decreases considerably, in addition to revealing their weaknesses and strengths. Finally, we show that adversarial training causes the models to improve their robustness and even to exceed the original performance in some cases.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The success of pretrained word embeddings has motivated their use in the biomedical domain, with contextualized embeddings yielding remarkable results in several biomedical NLP tasks. However, there is a lack of research on quantifying their behavior under severe \"stress\" scenarios. In this work, we systematically evaluate three language models with adversarial examples, automatically constructed tests that allow us to examine how robust the models are. We propose two types of stress scenarios focused on the biomedical named entity recognition (NER) task, one inspired by spelling errors and another based on the use of synonyms for medical terms. Our experiments with three benchmarks show that the performance of the original models decreases considerably, in addition to revealing their weaknesses and strengths. Finally, we show that adversarial training causes the models to improve their robustness and even to exceed the original performance in some cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Biomedical NLP (BioNLP) is the field concerned with developing NLP tools and methods for the life sciences domain. Some applications of these techniques include e.g., discovery of gene-disease interactions (Pletscher-Frankild et al., 2015) , development of new drugs (Tari et al., 2010) , or automatic screening of biomedical documents . With the exponential growth of digital biomedical literature, the importance of BioNLP has become especially relevant as a tool to extract relevant knowledge for making decisions in clinical settings as well as in public health. In order to encourage the development of this area, public datasets and challenges have been shared with the community to solve these tasks, such as BioSSES (Soganc\u0131oglu et al., 2017) , HOC (Hanahan and Weinberg, 2000) , ChemProt (Kringelum et al., 2016) and BC5CDR (Li et al., 2016) , among others. At the same time, neural language models have shown significant progress since the introduction of models such as W2V (Mikolov et al., 2013) , and more recent models like ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) . These models, trained over large corpora (MED-LINE and PubMed in the biomedical domain) have obtained remarkable results in most NLP tasks, including BioNLP benchmarks (Peng et al., 2019) . However, they have not been systematically evaluated under severe stress conditions to test their robustness to specific linguistic phenomena. For this reason, the objective of this paper is to evaluate three well-known neural language models under stress conditions. As a case study, we evaluate NER benchmarks since it is a key BioNLP information extraction task.",
"cite_spans": [
{
"start": 206,
"end": 239,
"text": "(Pletscher-Frankild et al., 2015)",
"ref_id": "BIBREF26"
},
{
"start": 267,
"end": 286,
"text": "(Tari et al., 2010)",
"ref_id": "BIBREF33"
},
{
"start": 724,
"end": 750,
"text": "(Soganc\u0131oglu et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 757,
"end": 785,
"text": "(Hanahan and Weinberg, 2000)",
"ref_id": "BIBREF11"
},
{
"start": 797,
"end": 821,
"text": "(Kringelum et al., 2016)",
"ref_id": "BIBREF16"
},
{
"start": 833,
"end": 850,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 985,
"end": 1007,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF20"
},
{
"start": 1043,
"end": 1064,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF25"
},
{
"start": 1074,
"end": 1095,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 1139,
"end": 1152,
"text": "(MED-LINE and",
"ref_id": null
},
{
"start": 1266,
"end": 1285,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our stress test evaluation is inspired by the work of Naik et al. (2018) , which proposes the use of adversarial evaluation for natural language inference by adding distractions in sentences, and evaluating models on this test set. We propose a black-box adversarial evaluation methodology, which does not require access to the inner workings of the models in order to generate adversarial examples (Zhang et al., 2019) . Specifically, we make perturbations to the input data, also known as edit adversaries, that could cause the models to fall into erroneous predictions. Additionally, we train the models with the proposed adversarial examples, which is a methodology used in previous works (Belinkov and Bisk, 2018; Jia and Liang, 2017) to strengthen the neural language models during the training process. We hope that our work will motivate the development and use of adversarial examples to evaluate models and obtain more robust biomedical embeddings.",
"cite_spans": [
{
"start": 54,
"end": 72,
"text": "Naik et al. (2018)",
"ref_id": "BIBREF21"
},
{
"start": 399,
"end": 419,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 693,
"end": 718,
"text": "(Belinkov and Bisk, 2018;",
"ref_id": "BIBREF4"
},
{
"start": 719,
"end": 739,
"text": "Jia and Liang, 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Adversarial Evaluation of NLP Models One way to test NLP models is by using adversarial tests, which consist of applying intentional disturbances to a gold standard, to test whether the attack leads the models into incorrect predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Linoleic acid autoxidation inhibitions on all fractions were higher than that on alpha-tocopherol. Keyboard (K) Linoleic avid autoxidatiob inh9bitions on all fractjons were higher than that on zlpha-toclpherol.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original (O)",
"sec_num": null
},
{
"text": "Linoleic aicd autoxidtaion inhibtiions on all fractoins were higher than that on aplha-tocohperol. Synonymy (S) Linoleic acid autoxidation inhibitions on all fractions were higher than that on vitamin E. Previous works on adversarial attacks have demonstrated how dangerous it can be to use machine learning systems in real-world applications . Indeed, it is known that even small amounts of noise can cause severe failures in neural computer vision models (Akhtar and Mian, 2018) . However, such failures can be mitigated through adversarial training . These properties have in turn motivated novel adversarial strategies designed for various NLP tasks (Zhang et al., 2019) , as well as work on adversarial attacks focused on recurrent and transformer networks applied to generic NLP benchmarks (Aspillaga et al., 2020) .",
"cite_spans": [
{
"start": 457,
"end": 480,
"text": "(Akhtar and Mian, 2018)",
"ref_id": "BIBREF1"
},
{
"start": 654,
"end": 674,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 796,
"end": 820,
"text": "(Aspillaga et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Swap (W)",
"sec_num": null
},
{
"text": "Evaluation of Biomedical Models Models used in BioNLP tasks elicit particular interest in this context because an erroneous prediction can potentially be very harmful in practice -e.g., put at risk the health of patients (Sun et al., 2018) . Although adversarial attacks have been widely studied in tasks related to image analysis (Paschali et al., 2018; Finlayson et al., 2019; Ma et al., 2019) , to the best of our knowledge, a gap still exists regarding BioNLP models and tasks .",
"cite_spans": [
{
"start": 221,
"end": 239,
"text": "(Sun et al., 2018)",
"ref_id": "BIBREF31"
},
{
"start": 331,
"end": 354,
"text": "(Paschali et al., 2018;",
"ref_id": "BIBREF23"
},
{
"start": 355,
"end": 378,
"text": "Finlayson et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 379,
"end": 395,
"text": "Ma et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Swap (W)",
"sec_num": null
},
{
"text": "We follow a black-box attack methodology (Zhang et al., 2019) , which consists of making alterations in the input data to cause erroneous predictions in the models. The following subsections describe each of the adversarial sets, and their construction 1 . We show examples of the stress tests in Table 1 .",
"cite_spans": [
{
"start": 41,
"end": 61,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 297,
"end": 304,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Noise Adversaries These adversaries test the robustness of models to spelling errors. Inspired by (Belinkov and Bisk, 2018) , we constructed adversarial examples that try to emulate spelling errors made by human beings. We used SpaCy models (Neumann et al., 2019) to retrieve the medical words of each corpus and add noise to them. We used two types of alterations: i) Keyboard typo noise (K) involves replacing a random character in each relevant word with an adjacent character on a QWERTY English keyboard. This methodology could be adapted to keyboards with other designs or languages. ii) Swap noise (W) consists of selecting a random pair of consecutive characters in each relevant word and then swapping them.",
"cite_spans": [
{
"start": 98,
"end": 123,
"text": "(Belinkov and Bisk, 2018)",
"ref_id": "BIBREF4"
},
{
"start": 241,
"end": 263,
"text": "(Neumann et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "These adversaries test if a model can understand synonymy relations. Unlike the noise adversaries, this set focuses on modifying chemical and disease words (entities). We used PyMedTermino (Jean-Baptiste et al., 2015), which uses the vocabulary of UMLS (Bodenreider, 2004), to find the most similar or related term (synonym) to a certain word. If a synonym is retrieved, the original word is replaced; otherwise, it remains the same. In some cases, this method changes a simple entity (one word) to a composite one (multiple words), so the gold labels are also adjusted to avoid a mismatch in the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonymy Adversaries (S)",
"sec_num": null
},
{
"text": "Task and Datasets Biomedical NER is the task that aims at detecting biomedical entities of interest such as proteins, cell types, chemicals, or diseases in biomedical documents. We conducted our evaluation on three biomedical NER benchmarks using the IOB2 tag format (Ramshaw and Marcus, 1999). The BC5CDR corpus (Li et al., 2016) is composed of mentions of chemicals and diseases found in 1,500 PubMed articles. The BC4CHEMD corpus (Krallinger et al., 2015) contains mentions of chemicals and drugs from 10,000 MEDLINE abstracts. The NCBI-Disease corpus (Dogan et al., 2014) consists of 793 PubMed abstracts annotated with disease mentions. Table 2 lists the datasets used in this work along with their most relevant statistics.",
"cite_spans": [
{
"start": 313,
"end": 330,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 433,
"end": 458,
"text": "(Krallinger et al., 2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 642,
"end": 649,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Synonymy Adversaries (S)",
"sec_num": null
},
{
"text": "Embeddings and NER Models We evaluated both word (W2V) and contextualized embeddings. On the one hand, we assessed BioMedical W2V (Pyysalo et al., 2013) and ChemPatent W2V (Zhai et al., 2019) . The ChemPatent embeddings were trained on a 1.1 billion word corpus of chemical patents from 7 patent offices, whereas all the other embeddings were trained on the PubMed corpus. On the other hand, we evaluated BioBERT v1.1 and BlueBERT (P) (Peng et al., 2019) , both in their base version for convenience. Table 3 : Stress test evaluation results in terms of F1-score for each model and dataset. We report means and standard deviations by training and evaluating ten times with different seeds.",
"cite_spans": [
{
"start": 130,
"end": 152,
"text": "(Pyysalo et al., 2013)",
"ref_id": "BIBREF27"
},
{
"start": 172,
"end": 191,
"text": "(Zhai et al., 2019)",
"ref_id": "BIBREF35"
},
{
"start": 435,
"end": 454,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 501,
"end": 508,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synonymy Adversaries (S)",
"sec_num": null
},
{
"text": "BioBERT embeddings were trained on PubMed abstracts and full-text corpora consisting of 4.3 billion and 13.5 billion words each. BlueBERT was trained on 4 billion words from PubMed abstracts. We used the implementation provided by Peng et al. (2019) for NER with default hyperparameters. 2 Finally, we evaluate BioELMo (Jin et al., 2019) and ChemPatent ELMo (Zhai et al., 2019) . As NER models we either (a) fine-tuned BERT as proposed by Peng et al. (2019) or (b) used AllenNLP's basic biLSTM-CRF implementation 3 , with no hyperparameter tuning other than changing the initial embedding layer with one of the ELMo or W2V embeddings. For comparison purposes, we also include the \"vanilla\" version of the models mentioned above, which are pretrained with general corpora. We trained each model 10 times using different random seeds, for 15 epochs every time. We use CoNLL evaluation (Agirre and Soroa, 2007) , reporting the F1 score for all datasets.",
"cite_spans": [
{
"start": 231,
"end": 249,
"text": "Peng et al. (2019)",
"ref_id": "BIBREF24"
},
{
"start": 319,
"end": 337,
"text": "(Jin et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 358,
"end": 377,
"text": "(Zhai et al., 2019)",
"ref_id": "BIBREF35"
},
{
"start": 439,
"end": 457,
"text": "Peng et al. (2019)",
"ref_id": "BIBREF24"
},
{
"start": 883,
"end": 907,
"text": "(Agirre and Soroa, 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synonymy Adversaries (S)",
"sec_num": null
},
{
"text": "In this section we report the results of our experiments. Note that all percentage drops or increases are expressed relative to the original score, not as percentage points. Table 3 shows the evaluation results on the original (O) and adversarial test sets (K, W, and S). In general, the performance of models drops across all adversarial attacks. For BERT-based models, we observe that K attacks decrease performance by 43.1% on average, W by 34.3%, and S by 30.8%. BioBERT has the smallest decrease in performance, 34.4%, followed by BlueBERT, with a 37.9% decrease. We hypothesize that BioBERT is more robust than BlueBERT since the former was trained on a larger and more varied corpus. Furthermore, when comparing the performance across all datasets, we see that BC5CDR-Disease is the most affected in all stress tests, with a 37.7% performance drop, and the least affected is BC5CDR-Chemical, with 16.1%. The performance reduction of ELMo-based models is similar to that of BERT-based models. An exception is when subjected to W and S noise, where they showed increased robustness with respect to BERT and W2V models (W: 55.3% better, S: 6.9% better). In almost all the tests, BioELMo performed better than ChemPatent ELMo, except under W noise, where ChemPatent ELMo performed consistently better, by 5.1% on average. We hypothesize that these results are due to ELMo using a character-based input representation, which would allow handling of swapped characters inside words. W2V-based models were the most brittle but showed similar patterns to the previous models. Adversarial examples produced performance drops ranging from 53.8% on NCBI-Disease to 74.1% on BC5CDR-Disease. In the case of S adversaries, W2V-based models showed performance drops ranging from 17.8% on BC5CDR-Chemical to 55.3% on BC5CDR-Disease.",
"cite_spans": [],
"ref_spans": [
{
"start": 174,
"end": 181,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Regarding the \"vanilla\" models, we see that they all perform worse on the original dataset (O) than their biomedical counterparts. Likewise, they are more fragile to adversarial attacks in the biomedical scenario. On average, BERT's performance decreases by 39.6%, ELMo's by 34.4%, and W2V's by 59.6% across all datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Evaluation Results",
"sec_num": null
},
{
"text": "Even though the BC5CDR dataset covers both chemicals and diseases, the disease task is more affected by S adversaries. We believe this is due to the higher number of words affected by the attacks compared to the other benchmarks (Table 2 ). Another possible cause is the kind of synonyms used to replace the entities, which tend to be both superficially dissimilar and longer than their originals, e.g., arrhythmia is replaced by heart conduction disorder. By contrast, chemical synonyms often include terms derived from the original, e.g., morphine is changed to morphine sulfate.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 237,
"text": "(Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Adversarial Evaluation Results",
"sec_num": null
},
{
"text": "Training on Adversarial Examples Additionally, we subjected the training sets to adversarial attacks, and evaluated the models both against the original test sets and their noisy counterparts. When training with K noise, we observed performance decreases of 21.2%, followed by W, 15.8%, and S with a slight decline of 0.8%, compared to 44.4%, 46.3% and 31.3% respectively in the Adversarial Evaluation setting. Besides, and interestingly, training with S improves performance in some cases, by up to 5.5% compared to the original S test set. We hypothesize that this is because the introduced adversarial samples work as a data augmentation mechanism. In terms of datasets, we see that BC5CDR-Disease is the most affected by adversaries, with an average 17.5% drop, and the least affected is NCBI-Disease, with an average 9.7% drop compared to the non-adversarial test set. When comparing the three architectures we see that BERT is affected by 6.3%, ELMo by 7.6% and W2V by 24.0% on average compared to the original test set. This result stands in line with findings on other NLP tasks, where BERT comes up first, followed by ELMo and W2V (Peng et al., 2019) . This is because BERT uses recent methods and techniques like Transformer (Vaswani et al., 2017) and WordPiece tokenizer (Schuster and Nakajima, 2012) that allow it to learn better representations.",
"cite_spans": [
{
"start": 1140,
"end": 1159,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 1235,
"end": 1257,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF34"
},
{
"start": 1282,
"end": 1311,
"text": "(Schuster and Nakajima, 2012)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Evaluation Results",
"sec_num": null
},
{
"text": "BioBERT Error Analysis This section seeks to understand how the most robust model, BioBERT, behaves under adversarial evaluation. To this end, we analyzed NER model confusions with respect to the original datasets, synonym (S), swap (W), and keyboard (K) perturbations on the BC5CDR chemical and disease dataset(s).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Evaluation Results",
"sec_num": null
},
{
"text": "In the original dataset (Figure 1(a) ), we see that most of the errors come from confusing I and O labels (32% of the cases). Under adversarial attacks, this type of error spreads to other IOB labels. For keyboard (K) errors (Figure 1(b) ), the most frequent mistake is to confuse B with O, with 16.6% of these cases. The same goes for swap (W) perturbations (Figure 1(c) ), where this error is repeated 15% of the time. When using synonyms (S) (Figure 1(d) ), error rates are, by contrast, much lower than under K and W. We believe that this happens because entities are converted into similar ones. For instance, \"stomach neoplasm\" gets transformed into \"stomach tumor\". Lastly, regardless of the adversaries, there are confusions with numbers and special character sequences that the model classifies as I (i.e., lie inside an entity span) but whose ground truth label is O (i.e., lie outside an entity span).",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 36,
"text": "(Figure 1(a)",
"ref_id": "FIGREF0"
},
{
"start": 225,
"end": 237,
"text": "(Figure 1(b)",
"ref_id": "FIGREF0"
},
{
"start": 359,
"end": 371,
"text": "(Figure 1(c)",
"ref_id": "FIGREF0"
},
{
"start": 445,
"end": 457,
"text": "(Figure 1(d)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Adversarial Evaluation Results",
"sec_num": null
},
{
"text": "In this work, we have investigated whether large-scale biomedical word (W2V) and contextualized word embeddings (BERT and ELMo) are robust with respect to black-box adversarial attacks in the biomedical NER task. Our experimental results show different sensitivities of the models to misspellings and synonyms. Among the main findings, we show that BERT-based models are generally better prepared for adversarial attacks, but they are still fragile, leaving room for future improvement in the field. ELMo-based models showed lower robustness in most cases but consistently outperformed BERT in some specific scenarios. W2V proved to be the most brittle but showed similar patterns in terms of relative performance drops. We also demonstrate that training with adversaries considerably decreases the drop in performance and can even improve on the models' original performance when training with synonyms, as they act as a form of regularization and data augmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "All stress tests available at https://github.com/ialabpuc/BioNLP-StressTest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/ncbi-nlp/bluebert 3 https://github.com/allenai/allennlp-models",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to the anonymous reviewers for their valuable feedback on earlier versions of this paper. This work was partially funded by ANID -Millennium Science Initiative Program -Code ICN17_002 and by ANID, FONDECYT grant 1191791, as well as supported by the TPU Research Cloud (TRC) program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semeval-2007 task 02: Evaluating word sense induction and discrimination systems",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the fourth international workshop on semantic evaluations (semeval-2007)",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre and Aitor Soroa. 2007. Semeval-2007 task 02: Evaluating word sense induction and dis- crimination systems. In Proceedings of the fourth international workshop on semantic evaluations (semeval-2007), pages 7-12.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Threat of adversarial attacks on deep learning in computer vision: A survey",
"authors": [
{
"first": "Naveed",
"middle": [],
"last": "Akhtar",
"suffix": ""
},
{
"first": "Ajmal",
"middle": [],
"last": "Mian",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Access",
"volume": "6",
"issue": "",
"pages": "14410--14430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naveed Akhtar and Ajmal Mian. 2018. Threat of adver- sarial attacks on deep learning in computer vision: A survey. IEEE Access, 6:14410-14430.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Adversarial evaluation of bert for biomedical named entity recognition",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Araujo",
"suffix": ""
},
{
"first": "Andr\u00e9s",
"middle": [],
"last": "Carvallo",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Parra",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the The Fourth Widening Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Araujo, Andr\u00e9s Carvallo, and Denis Parra. 2020. Adversarial evaluation of bert for biomedi- cal named entity recognition. In Proceedings of the The Fourth Widening Natural Language Processing Workshop, Seattle, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Stress test evaluation of transformerbased models in natural language understanding tasks",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Aspillaga",
"suffix": ""
},
{
"first": "Andr\u00e9s",
"middle": [],
"last": "Carvallo",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Araujo",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "1882--1894",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlos Aspillaga, Andr\u00e9s Carvallo, and Vladimir Araujo. 2020. Stress test evaluation of transformer- based models in natural language understanding tasks. In Proceedings of The 12th Language Re- sources and Evaluation Conference, pages 1882- 1894, Marseille, France. European Language Re- sources Association.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Synthetic and natural noise both break neural machine translation",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine transla- tion. In International Conference on Learning Rep- resentations.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The unified medical language system (UMLS): integrating biomedical terminology",
"authors": [
{
"first": "O",
"middle": [],
"last": "Bodenreider",
"suffix": ""
}
],
"year": 2004,
"venue": "Nucleic Acids Research",
"volume": "32",
"issue": "90001",
"pages": "267--270",
"other_ids": {
"DOI": [
"10.1093/nar/gkh061"
]
},
"num": null,
"urls": [],
"raw_text": "O. Bodenreider. 2004. The unified medical language system (UMLS): integrating biomedical terminol- ogy. Nucleic Acids Research, 32(90001):267D- 270.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic document screening of medical literature using word and text embeddings in an active learning setting",
"authors": [
{
"first": "Andres",
"middle": [],
"last": "Carvallo",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Parra",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Lobel",
"suffix": ""
},
{
"first": "Alvaro",
"middle": [],
"last": "Soto",
"suffix": ""
}
],
"year": 2020,
"venue": "Scientometrics",
"volume": "125",
"issue": "3",
"pages": "3047--3084",
"other_ids": {
"DOI": [
"10.1007/s11192-020-03648-6"
]
},
"num": null,
"urls": [],
"raw_text": "Andres Carvallo, Denis Parra, Hans Lobel, and Al- varo Soto. 2020. Automatic document screening of medical literature using word and text embed- dings in an active learning setting. Scientometrics, 125(3):3047-3084.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "NCBI disease corpus: A resource for disease name recognition and concept normalization",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Rezarta Islamaj Dogan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Biomedical Informatics",
"volume": "47",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2013.12.006"
]
},
"num": null,
"urls": [],
"raw_text": "Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. NCBI disease corpus: A resource for disease name recognition and concept normalization. Journal of Biomedical Informatics, 47:1-10.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adversarial attacks on medical machine learning",
"authors": [
{
"first": "Samuel",
"middle": [
"G"
],
"last": "Finlayson",
"suffix": ""
},
{
"first": "John",
"middle": [
"D"
],
"last": "Bowers",
"suffix": ""
},
{
"first": "Joichi",
"middle": [],
"last": "Ito",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"L"
],
"last": "Zittrain",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Beam",
"suffix": ""
},
{
"first": "Isaac",
"middle": [
"S"
],
"last": "Kohane",
"suffix": ""
}
],
"year": 2019,
"venue": "Science",
"volume": "363",
"issue": "6433",
"pages": "1287--1289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel G. Finlayson, John D. Bowers, Joichi Ito, Jonathan L. Zittrain, Andrew L. Beam, and Isaac S. Kohane. 2019. Adversarial attacks on medical ma- chine learning. Science, 363(6433):1287-1289.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Explaining and harnessing adversarial examples",
"authors": [
{
"first": "Ian",
"middle": [
"J"
],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6572"
]
},
"num": null,
"urls": [],
"raw_text": "Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversar- ial examples. arXiv preprint arXiv:1412.6572.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The hallmarks of cancer",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Hanahan",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"A"
],
"last": "Weinberg",
"suffix": ""
}
],
"year": 2000,
"venue": "Cell",
"volume": "100",
"issue": "1",
"pages": "57--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas Hanahan and Robert A. Weinberg. 2000. The hallmarks of cancer. Cell, 100(1):57-70.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "PyMedTermino: an open-source generic API for advanced terminology services",
"authors": [
{
"first": "Jean-Baptiste",
"middle": [],
"last": "Lamy",
"suffix": ""
},
{
"first": "Alain",
"middle": [],
"last": "Venot",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Duclos",
"suffix": ""
}
],
"year": 2015,
"venue": "Studies in Health Technology and Informatics",
"volume": "210",
"issue": "",
"pages": "924--928",
"other_ids": {
"DOI": [
"10.3233/978-1-61499-512-8-924"
]
},
"num": null,
"urls": [],
"raw_text": "Jean-Baptiste Lamy, Alain Venot, and Catherine Duclos. 2015. PyMedTermino: an open-source generic API for advanced terminology services. Studies in Health Technology and Informatics, 210(Digital Healthcare Empowering Europeans):924-928.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adversarial examples for evaluating reading comprehension systems",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2021--2031",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1215"
]
},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Probing biomedical embeddings from language models",
"authors": [
{
"first": "Qiao",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Xinghua",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "82--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiao Jin, Bhuwan Dhingra, William Cohen, and Xinghua Lu. 2019. Probing biomedical embeddings from language models. In Proceedings of the 3rd Workshop on Evaluating Vector Space Representa- tions for NLP, pages 82-89.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The CHEMDNER corpus of chemicals and drugs and its annotation principles",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Krallinger",
"suffix": ""
},
{
"first": "Obdulia",
"middle": [],
"last": "Rabal",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Leitner",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vazquez",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Salgado",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Yanan",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Donghong",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Lowe",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"A"
],
"last": "Sayle",
"suffix": ""
},
{
"first": "Riza",
"middle": [
"Theresa"
],
"last": "Batista-Navarro",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Rak",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Huber",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "S\u00e9rgio",
"middle": [],
"last": "Matos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Campos",
"suffix": ""
},
{
"first": "Buzhou",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Tsendsuren",
"middle": [],
"last": "Munkhdalai",
"suffix": ""
},
{
"first": "Keun",
"middle": [
"Ho"
],
"last": "Ryu",
"suffix": ""
},
{
"first": "SV",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Senthil",
"middle": [],
"last": "Nathan",
"suffix": ""
},
{
"first": "Slavko",
"middle": [],
"last": "\u017ditnik",
"suffix": ""
},
{
"first": "Marko",
"middle": [],
"last": "Bajec",
"suffix": ""
},
{
"first": "Lutz",
"middle": [],
"last": "Weber",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Irmer",
"suffix": ""
},
{
"first": "Saber",
"middle": [
"A"
],
"last": "Akhondi",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"A"
],
"last": "Kors",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "An",
"suffix": ""
},
{
"first": "Utpal",
"middle": [
"Kumar"
],
"last": "Sikdar",
"suffix": ""
},
{
"first": "Asif",
"middle": [],
"last": "Ekbal",
"suffix": ""
},
{
"first": "Masaharu",
"middle": [],
"last": "Yoshioka",
"suffix": ""
},
{
"first": "Thaer",
"middle": [
"M"
],
"last": "Dieb",
"suffix": ""
},
{
"first": "Miji",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
},
{
"first": "Madian",
"middle": [],
"last": "Khabsa",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lee"
],
"last": "Giles",
"suffix": ""
},
{
"first": "Hongfang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Komandur",
"middle": [
"Elayavilli"
],
"last": "Ravikumar",
"suffix": ""
},
{
"first": "Andre",
"middle": [],
"last": "Lamurias",
"suffix": ""
},
{
"first": "Francisco",
"middle": [
"M"
],
"last": "Couto",
"suffix": ""
},
{
"first": "Hong-Jie",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"Tzong-Han"
],
"last": "Tsai",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Ata",
"suffix": ""
},
{
"first": "Tolga",
"middle": [],
"last": "Can",
"suffix": ""
},
{
"first": "Anabel",
"middle": [],
"last": "Usi\u00e9",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Alves",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Segura-Bedmar",
"suffix": ""
},
{
"first": "Paloma",
"middle": [],
"last": "Mart\u00ednez",
"suffix": ""
},
{
"first": "Julen",
"middle": [],
"last": "Oyarzabal",
"suffix": ""
},
{
"first": "Alfonso",
"middle": [],
"last": "Valencia",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Cheminformatics",
"volume": "7",
"issue": "S1",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/1758-2946-7-s1-s2"
]
},
"num": null,
"urls": [],
"raw_text": "Martin Krallinger, Obdulia Rabal, Florian Leitner, Miguel Vazquez, David Salgado, Zhiyong Lu, Robert Leaman, Yanan Lu, Donghong Ji, Daniel M Lowe, Roger A Sayle, Riza Theresa Batista-Navarro, Rafal Rak, Torsten Huber, Tim Rockt\u00e4schel, S\u00e9r- gio Matos, David Campos, Buzhou Tang, Hua Xu, Tsendsuren Munkhdalai, Keun Ho Ryu, SV Ra- manan, Senthil Nathan, Slavko \u017ditnik, Marko Ba- jec, Lutz Weber, Matthias Irmer, Saber A Akhondi, Jan A Kors, Shuo Xu, Xin An, Utpal Kumar Sik- dar, Asif Ekbal, Masaharu Yoshioka, Thaer M Dieb, Miji Choi, Karin Verspoor, Madian Khabsa, C Lee Giles, Hongfang Liu, Komandur Elayavilli Raviku- mar, Andre Lamurias, Francisco M Couto, Hong- Jie Dai, Richard Tzong-Han Tsai, Caglar Ata, Tolga Can, Anabel Usi\u00e9, Rui Alves, Isabel Segura-Bedmar, Paloma Mart\u00ednez, Julen Oyarzabal, and Alfonso Va- lencia. 2015. The CHEMDNER corpus of chemi- cals and drugs and its annotation principles. Journal of Cheminformatics, 7(S1).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "ChemProt-3.0: a global chemical biology diseases mapping",
"authors": [
{
"first": "J",
"middle": [],
"last": "Kringelum",
"suffix": ""
},
{
"first": "S",
"middle": [
"K"
],
"last": "Kjaerulff",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Brunak",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Lund",
"suffix": ""
},
{
"first": "T",
"middle": [
"I"
],
"last": "Oprea",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Taboureau",
"suffix": ""
}
],
"year": 2016,
"venue": "Database",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Kringelum, S. K. Kjaerulff, S. Brunak, O. Lund, T. I. Oprea, and O. Taboureau. 2016. Chemprot- 3.0: a global chemical biology diseases mapping. Database.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "BioBERT: a pretrained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {
"DOI": [
"10.1093/bioinformatics/btz682"
]
},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre- trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BioCreative v CDR task corpus: a resource for chemical disease relation extraction",
"authors": [
{
"first": "Jiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yueping",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Robin",
"middle": [
"J"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Sciaky",
"suffix": ""
},
{
"first": "Chih-Hsuan",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Allan",
"middle": [
"Peter"
],
"last": "Davis",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"J"
],
"last": "Mattingly",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"C"
],
"last": "Wiegers",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2016,
"venue": "Database",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1093/database/baw068"
]
},
"num": null,
"urls": [],
"raw_text": "Jiao Li, Yueping Sun, Robin J. Johnson, Daniela Sci- aky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J. Mattingly, Thomas C. Wiegers, and Zhiyong Lu. 2016. BioCreative v CDR task cor- pus: a resource for chemical disease relation extrac- tion. Database, 2016:baw068.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Understanding adversarial attacks on deep learning based medical image analysis systems",
"authors": [
{
"first": "Xingjun",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Yisen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yitian",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bailey",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xingjun Ma, Yuhao Niu, Lin Gu, Yisen Wang, Yitian Zhao, James Bailey, and Feng Lu. 2019. Under- standing adversarial attacks on deep learning based medical image analysis systems.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their composition- ality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Stress test evaluation for natural language inference",
"authors": [
{
"first": "Aakanksha",
"middle": [],
"last": "Naik",
"suffix": ""
},
{
"first": "Abhilasha",
"middle": [],
"last": "Ravichander",
"suffix": ""
},
{
"first": "Norman",
"middle": [],
"last": "Sadeh",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Rose",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2340--2353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340-2353, Santa Fe, New Mexico, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "ScispaCy: Fast and robust models for biomedical natural language processing",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. ScispaCy: Fast and robust models for biomedical natural language processing. In Proceedings of the 18th BioNLP Workshop and Shared Task, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Generalizability vs. robustness: Investigating medical imaging networks using adversarial examples",
"authors": [
{
"first": "Magdalini",
"middle": [],
"last": "Paschali",
"suffix": ""
},
{
"first": "Sailesh",
"middle": [],
"last": "Conjeti",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Navarro",
"suffix": ""
},
{
"first": "Nassir",
"middle": [],
"last": "Navab",
"suffix": ""
}
],
"year": 2018,
"venue": "Medical Image Computing and Computer Assisted Intervention -MICCAI 2018",
"volume": "",
"issue": "",
"pages": "493--501",
"other_ids": {
"DOI": [
"10.1007/978-3-030-00928-1_56"
]
},
"num": null,
"urls": [],
"raw_text": "Magdalini Paschali, Sailesh Conjeti, Fernando Navarro, and Nassir Navab. 2018. Generalizability vs. robust- ness: Investigating medical imaging networks using adversarial examples. In Medical Image Comput- ing and Computer Assisted Intervention -MICCAI 2018, pages 493-501. Springer International Pub- lishing.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Shankai",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "58--65",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5006"
]
},
"num": null,
"urls": [],
"raw_text": "Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 58- 65, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "DISEASES: Text mining and data integration of disease-gene associations",
"authors": [
{
"first": "Sune",
"middle": [],
"last": "Pletscher-Frankild",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Pallej\u00e0",
"suffix": ""
},
{
"first": "Kalliopi",
"middle": [],
"last": "Tsafou",
"suffix": ""
},
{
"first": "Janos",
"middle": [
"X"
],
"last": "Binder",
"suffix": ""
},
{
"first": "Lars",
"middle": [
"Juhl"
],
"last": "Jensen",
"suffix": ""
}
],
"year": 2015,
"venue": "Methods",
"volume": "74",
"issue": "",
"pages": "83--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sune Pletscher-Frankild, Albert Pallej\u00e0, Kalliopi Tsafou, Janos X Binder, and Lars Juhl Jensen. 2015. Diseases: Text mining and data integration of disease-gene associations. Methods, 74:83-89.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Distributional semantics resources for biomedical text processing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Moen",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Salakoski",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of LBM 2013",
"volume": "",
"issue": "",
"pages": "39--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Pyysalo, F. Ginter, H. Moen, T. Salakoski, and S. Ananiadou. 2013. Distributional semantics re- sources for biomedical text processing. In Proceed- ings of LBM 2013, pages 39-44.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Text chunking using transformation-based learning",
"authors": [
{
"first": "Lance",
"middle": [
"A"
],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
}
],
"year": 1999,
"venue": "Natural language processing using very large corpora",
"volume": "",
"issue": "",
"pages": "157--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lance A Ramshaw and Mitchell P Marcus. 1999. Text chunking using transformation-based learning. In Natural language processing using very large cor- pora, pages 157-176. Springer.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Japanese and korean voice search",
"authors": [
{
"first": "M",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Nakajima",
"suffix": ""
}
],
"year": 2012,
"venue": "2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5149--5152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Schuster and K. Nakajima. 2012. Japanese and ko- rean voice search. In 2012 IEEE International Con- ference on Acoustics, Speech and Signal Processing (ICASSP), pages 5149-5152.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "BIOSSES: a semantic sentence similarity estimation system for the biomedical domain",
"authors": [
{
"first": "Gizem",
"middle": [],
"last": "Soganc\u0131oglu",
"suffix": ""
},
{
"first": "Hakime",
"middle": [],
"last": "\u00d6zt\u00fcrk",
"suffix": ""
},
{
"first": "Arzucan",
"middle": [],
"last": "\u00d6zg\u00fcr",
"suffix": ""
}
],
"year": 2017,
"venue": "Bioinformatics",
"volume": "33",
"issue": "14",
"pages": "i49--i58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gizem Soganc\u0131oglu, Hakime \u00d6zt\u00fcrk, and Arzucan \u00d6zg\u00fcr. 2017. Biosses: a semantic sentence simi- larity estimation system for the biomedical domain. Bioinformatics, 33(14):i49-i58.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Identify susceptible locations in medical records via adversarial attacks on deep predictive models",
"authors": [
{
"first": "Mengying",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Fengyi",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Jinfeng",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiayu",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '18",
"volume": "",
"issue": "",
"pages": "793--801",
"other_ids": {
"DOI": [
"10.1145/3219819.3219909"
]
},
"num": null,
"urls": [],
"raw_text": "Mengying Sun, Fengyi Tang, Jinfeng Yi, Fei Wang, and Jiayu Zhou. 2018. Identify susceptible loca- tions in medical records via adversarial attacks on deep predictive models. In Proceedings of the 24th ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining, KDD '18, page 793-801, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Intriguing properties of neural networks",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Bruna",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Discovering drug-drug interactions: a text-mining and reasoning approach based on properties of drug metabolism",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Tari",
"suffix": ""
},
{
"first": "Saadat",
"middle": [],
"last": "Anwar",
"suffix": ""
},
{
"first": "Shanshan",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Chitta",
"middle": [],
"last": "Baral",
"suffix": ""
}
],
"year": 2010,
"venue": "Bioinformatics",
"volume": "26",
"issue": "18",
"pages": "i547--i553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Tari, Saadat Anwar, Shanshan Liang, James Cai, and Chitta Baral. 2010. Discovering drug-drug in- teractions: a text-mining and reasoning approach based on properties of drug metabolism. Bioinfor- matics, 26(18):i547-i553.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Improving chemical named entity recognition in patents with contextualized word embeddings",
"authors": [
{
"first": "Zenan",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Dat",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Saber",
"middle": [],
"last": "Akhondi",
"suffix": ""
},
{
"first": "Camilo",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Druckenbrodt",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Michelle",
"middle": [],
"last": "Gregory",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "328--338",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5035"
]
},
"num": null,
"urls": [],
"raw_text": "Zenan Zhai, Dat Quoc Nguyen, Saber Akhondi, Camilo Thorne, Christian Druckenbrodt, Trevor Cohn, Michelle Gregory, and Karin Verspoor. 2019. Improving chemical named entity recognition in patents with contextualized word embeddings. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 328-338, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Adversarial attacks on deep learning models in natural language processing: A survey",
"authors": [
{
"first": "Wei",
"middle": [
"Emma"
],
"last": "Zhang",
"suffix": ""
},
{
"first": "Quan",
"middle": [
"Z"
],
"last": "Sheng",
"suffix": ""
},
{
"first": "Ahoud",
"middle": [],
"last": "Alhazmi",
"suffix": ""
},
{
"first": "Chenliang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Emma Zhang, Quan Z. Sheng, Ahoud Alhazmi, and Chenliang Li. 2019. Adversarial attacks on deep learning models in natural language processing: A survey.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Normalized confusion matrices for test results with the (a) original (O), (b) keyboard (K), (c) swap (W), and (d) synonym (S) sets, averaged over the BC5CDR-Disease and BC5CDR-Chemical datasets.",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"html": null,
"text": "Examples of sentences of the stress tests.",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF2": {
"html": null,
"text": "Details of the datasets used. The last three columns present the percentage of tokens modified for each of the adversarial datasets. The slash separates the values belonging to the training and the test set.",
"num": null,
"content": "<table><tr><td>Model</td><td colspan=\"4\">BC5CDR-Chemical</td><td colspan=\"4\">BC5CDR-Disease</td><td colspan=\"4\">BC4CHEMD</td><td colspan=\"4\">NCBI-Disease</td></tr><tr><td></td><td>O</td><td>K</td><td>W</td><td>S</td><td>O</td><td>K</td><td>W</td><td>S</td><td>O</td><td>K</td><td>W</td><td>S</td><td>O</td><td>K</td><td>W</td><td>S</td></tr><tr><td>BioBERT</td><td>.937 \u00b1.004</td><td>.745 \u00b1.006</td><td>.635 \u00b1.008</td><td>.770 \u00b1.011</td><td>.863 \u00b1.004</td><td>.407 \u00b1.008</td><td>.473 \u00b1.010</td><td>.366 \u00b1.007</td><td>.919 \u00b1.004</td><td>.585 \u00b1.005</td><td>.675 \u00b1.007</td><td>.678 \u00b1.009</td><td>.887 \u00b1.004</td><td>.483 \u00b1.007</td><td>.628 \u00b1.011</td><td>.683 \u00b1.006</td></tr><tr><td>BlueBERT</td><td>.901 \u00b1.003</td><td>.583 \u00b1.005</td><td>.708 \u00b1.008</td><td>.739 \u00b1.010</td><td>.838 \u00b1.004</td><td>.368 \u00b1.007</td><td>.441 \u00b1.011</td><td>.362 \u00b1.007</td><td>.820 \u00b1.003</td><td>.472 \u00b1.004</td><td>.570 \u00b1.009</td><td>.607 \u00b1.010</td><td>.773 \u00b1.003</td><td>.332 \u00b1.006</td><td>.438 \u00b1.009</td><td>.615 \u00b1.006</td></tr><tr><td>BERT</td><td>.887 \u00b1.004</td><td>.563 \u00b1.007</td><td>.684 \u00b1.010</td><td>.738 \u00b1.015</td><td>.816 \u00b1.006</td><td>.356 \u00b1.009</td><td>.431 \u00b1.013</td><td>.336 \u00b1.008</td><td>.808 \u00b1.004</td><td>.443 \u00b1.006</td><td>.509 \u00b1.008</td><td>.598 \u00b1.013</td><td>.771 \u00b1.005</td><td>.305 \u00b1.008</td><td>.433 \u00b1.014</td><td>.583 \u00b1.007</td></tr><tr><td>BioELMo</td><td>.923 \u00b1.001</td><td>.838 \u00b1.003</td><td>.726 \u00b1.010</td><td>.757 \u00b1.032</td><td>.845 \u00b1.002</td><td>.656 \u00b1.018</td><td>.482 \u00b1.025</td><td>.408 \u00b1.013</td><td>.915 \u00b1.001</td><td>.770 \u00b1.003</td><td>.634 \u00b1.004</td><td>.668 \u00b1.004</td><td>.869 \u00b1.005</td><td>.711 \u00b1.017</td><td>.543 \u00b1.026</td><td>.677 \u00b1.012</td></tr><tr><td>ChemPatent ELMo</td><td>.910</td><td>.822</td><td>.745</td><td>.757</td><td>.824</td><td>.637</td><td>.508</td><td>.380</td><td>.898</td><td>.766</td><td>.662</td><td>.642</td><td>.863</td><td>.693</td><td>.586</td><td>.655</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"html": null,
"text": "Adversarial training results in terms of F1-score for each model and dataset. The training column shows the O set merged with K, W, or S. The test set is shown in parentheses for each scenario.",
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}