{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:34:57.499434Z"
},
"title": "Autobots Ensemble: Identifying and Extracting Adverse Drug Reaction from Tweets using Transformer Based Pipelines",
"authors": [
{
"first": "Sougata",
"middle": [],
"last": "Saha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Engineering University at Buffalo",
"location": {
"postCode": "14260",
"settlement": "Amherst",
"region": "NY"
}
},
"email": "sougatas@buffalo.edu"
},
{
"first": "Souvik",
"middle": [],
"last": "Das",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Engineering University at Buffalo",
"location": {
"postCode": "14260",
"settlement": "Amherst",
"region": "NY"
}
},
"email": "souvikda@buffalo.edu"
},
{
"first": "Prashi",
"middle": [],
"last": "Khurana",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Engineering University at Buffalo",
"location": {
"postCode": "14260",
"settlement": "Amherst",
"region": "NY"
}
},
"email": "prashikh@buffalo.edu"
},
{
"first": "Rohini",
"middle": [
"K"
],
"last": "Srihari",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Engineering University at Buffalo",
"location": {
"postCode": "14260",
"settlement": "Amherst",
"region": "NY"
}
},
"email": "rohini@buffalo.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper details a system designed for Social Media Mining for Health Applications (SMM4H) Shared Task 2020. We specifically describe the systems designed to solve task 2: Automatic classification of multilingual tweets that report adverse effects, and task 3: Automatic extraction and normalization of adverse effects in English tweets. Fine tuning RoBERTa large for classifying English tweets enables us to achieve a F1 score of 56%, which is an increase of +10% compared to the average F1 score for all the submissions. Using BERT based NER and question answering, we are able to achieve a F1 score of 57.6% for extracting adverse reaction mentions from tweets, which is an increase of +1.2% compared to the average F1 score for all the submissions.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper details a system designed for Social Media Mining for Health Applications (SMM4H) Shared Task 2020. We specifically describe the systems designed to solve task 2: Automatic classification of multilingual tweets that report adverse effects, and task 3: Automatic extraction and normalization of adverse effects in English tweets. Fine tuning RoBERTa large for classifying English tweets enables us to achieve a F1 score of 56%, which is an increase of +10% compared to the average F1 score for all the submissions. Using BERT based NER and question answering, we are able to achieve a F1 score of 57.6% for extracting adverse reaction mentions from tweets, which is an increase of +1.2% compared to the average F1 score for all the submissions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the world adapting to the new normal, social media is proving to be a key resource for humans. With more people sharing their life experiences in social media platforms, pharmaceutical firms can benefit by leveraging the power of deep learning and natural language processing for digital pharmacovigilance. In this paper, we showcase our systems for task 2 & 3 of the Social Media Mining for Health Applications Shared Task 2020 (Klein et al., 2020) . Inspired by the current research using transformer architectures, and the results that KFU NLP Team (Miftahutdinov et al., 2019) had achieved at SMM4H 2019 using BERT (Devlin et al., 2018) , we experimented with a suite of different transformer architectures. Transformers (Vaswani et al., 2017) are solely based on attention mechanisms, which dispense recurrence and convolutions entirely, enabling parallel processing and state of the art models. Liu et al. (2019) in their research uncovered that BERT was significantly under trained, and proposed RoBERTa: A Robustly Optimized BERT Pretraining Approach. We fine tuned RoBERTa on the English training tweets to classify a tweet as containing adverse reaction mention or not. For the Russian and French tweets classification tasks, we fine tuned RuBERT (Kuratov and Arkhipov, 2019) and CamemBERT (Martin et al., 2019) respectively.",
"cite_spans": [
{
"start": 434,
"end": 454,
"text": "(Klein et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 557,
"end": 585,
"text": "(Miftahutdinov et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 624,
"end": 645,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 730,
"end": 752,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 906,
"end": 923,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 1262,
"end": 1290,
"text": "(Kuratov and Arkhipov, 2019)",
"ref_id": "BIBREF7"
},
{
"start": 1305,
"end": 1326,
"text": "(Martin et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For extracting the adverse reaction mentions from a tweet, we devised an end to end pipeline by posing the task as a named entity recognition (NER) task as well as a question answering task. We fine tuned BERT, SciBERT (Beltagy et al., 2019) and BioBERT (Lee et al., 2019) and created an ensemble NER, and fine tuned BioBERT QA for the question answering module. Post adverse mention extraction, we normalised the mention to the MedDRA code using pre-trained fastText (Bojanowski et al., 2016) embeddings and cosine similarity. The paper is organized as follows. In section 2 we describe the problem statements. Section 3 describes the architectures and methods that were implemented for each of the tasks, and showcase our results in section 4. We discuss some of the challenging aspects of the problems in section 5, and finally conclude in section 6.",
"cite_spans": [
{
"start": 219,
"end": 241,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 254,
"end": 272,
"text": "(Lee et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 468,
"end": 493,
"text": "(Bojanowski et al., 2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Table 1: Data distribution for each task. Task 2-English: 9.25% positive, 90.75% negative, 25,672 training examples, 5,000 test examples. Task 2-Russian: 8.75% positive, 91.25% negative, 7,612 training, 1,903 test. Task 2-French: 1.61% positive, 98.39% negative, 2,426 training, 607 test. Task 3-Resolution (NER + Norm): 51.01% positive, 48.61% negative, 2,376 training, 1,000 test.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Task",
"sec_num": null
},
{
"text": "2 Task and Data Description 2.1 Task 2: Automatic classification of multilingual tweets that report adverse effects This task involved developing a system which is capable of distinguishing tweets that report an adverse reaction to medication from tweets that do not. This task was subdivided into 3 tasks by language of tweet: English, Russian and French. Table 1 shows the distribution of training and testing data samples, and the split of positive and negative examples in the training data set.",
"cite_spans": [],
"ref_spans": [
{
"start": 357,
"end": 364,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task",
"sec_num": null
},
{
"text": "This task consisted of two parts. The first being extraction of the specific adverse reaction of a drug from English tweets. The second being mapping the extracted adverse reaction to a standard concept ID in the MedDRA vocabulary. Table 1 shows the distribution of training and testing data samples, and the split of positive and negative examples in the training data set.",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 239,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task 3: Automatic extraction and normalization of adverse effects in English tweets",
"sec_num": "2.2"
},
{
"text": "3 Methods",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 3: Automatic extraction and normalization of adverse effects in English tweets",
"sec_num": "2.2"
},
{
"text": "Twitter data is almost always noisy, hence we cleansed and pre-processed the tweets before training the classifier. Using Ekphrasis (Baziotis et al., 2017) , regex and NLTK, we converted tweets to lowercase, normalized elongated characters, repeated characters and hashtags, unpacked contractions, removed URL, mentions, smileys and emojis, and removed special tweet tokens like 'rt'. We experimented with different transformer models like BERT base uncased, SciBERT with scivocab, BioBERT base v1.1 and RoBERTa large, and achieved best validation results with RoBERTa large. We fine tuned the RoBERTa large model using the pre-processed English tweets. We sum pooled the last 6 layers of RoBERTa, and performed classification by passing the pooled representation through a linear layer. We trained the model for 6 epochs with a learning rate of 2e-5. Table 2 demonstrates the model performance on the validation set, and Table 3 demonstrates the model performance on the test set.",
"cite_spans": [
{
"start": 132,
"end": 155,
"text": "(Baziotis et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 852,
"end": 859,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 922,
"end": 929,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Task 2-Automatic classification of multilingual tweets that report adverse effects: English",
"sec_num": "3.1"
},
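The sum-pooling classification head described above can be sketched as follows. This is a minimal PyTorch illustration, not the authors' released code; random tensors stand in for the per-layer hidden states a RoBERTa-large backbone would produce (24 layers plus embeddings, hidden size 1024), and the class and variable names are ours.

```python
import torch
import torch.nn as nn

class SumPoolClassifier(nn.Module):
    """Classification head: sum-pool the last k transformer layers, then a linear layer."""
    def __init__(self, hidden_size=1024, num_layers_to_pool=6, num_labels=2):
        super().__init__()
        self.k = num_layers_to_pool
        self.linear = nn.Linear(hidden_size, num_labels)

    def forward(self, all_hidden_states):
        # all_hidden_states: sequence of (batch, seq_len, hidden) tensors, one per layer
        pooled = torch.stack(list(all_hidden_states[-self.k:]), dim=0).sum(dim=0)
        cls = pooled[:, 0, :]  # representation of the first (<s>/[CLS]) token
        return self.linear(cls)

# Stand-in for RoBERTa-large outputs: 25 layer outputs of shape (batch=4, seq=16, 1024)
hidden = [torch.randn(4, 16, 1024) for _ in range(25)]
logits = SumPoolClassifier()(hidden)
print(logits.shape)  # torch.Size([4, 2])
```

With a real backbone, the per-layer states would come from running the model with hidden-state outputs enabled and feeding them to this head.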
{
"text": "For the Russian and French classification, we pre-processed the tweets by removing special tweet tokens like 'rt', URL, mentions, smileys and emojis. We experimented with different transformer models like multilingual BERT, RuBERT, CamemBERT and FlauBERT (Le et al., 2019) , and got best validation results using RuBERT for Russian tweets, and CamemBERT for French tweets. For both the models we had sum pooled the last 4 layers of the transformer, and trained for 6 epochs with a learning rate of 2e-5. For the French tweets we achieved a validation F1 of 0.22, but unfortunately could not classify any tweets correctly in the test data set. Table 2 demonstrates the performance of the Russian tweet classifier on the validation set, and Table 3 demonstrates the model's performance on the test set.",
"cite_spans": [
{
"start": 255,
"end": 272,
"text": "(Le et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 643,
"end": 650,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 739,
"end": 746,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Task 2-Automatic classification of multilingual tweets that report adverse effects: Russian and French",
"sec_num": "3.2"
},
{
"text": "We devised a three step extraction pipeline for this task, which is illustrated in Figure 1b . We detail the three steps below: \u2022 Classifying tweets as containing ADR mentions: We cleanse the tweets using the same preprocessing pipeline as mentioned in section 3.1, and use the RoBERTa classifier trained from section 3.1 to classify tweets as containing adverse reaction mentions.",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 92,
"text": "Figure 1b",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Task 3-Automatic extraction and normalization of adverse effects in English tweets: Extraction",
"sec_num": "3.3"
},
{
"text": "\u2022 Transformer based ensemble named entity recognition (NER) tagger: We fine tuned SciBERT, BioBERT and BERT base to create an ensemble of NER taggers for extracting the ADR mention extract. Each tweet was tagged using the 'BIO' scheme, where 'B' denoted the start token of an extract, 'O' denoted tokens outside the extract, and 'I' denoted tokens inside the extract. We fine tuned each of the models in the ensemble for 5 epochs with a learning rate of 3e-5. Only the tweets that are classified as containing adverse drug reaction mentions from the previous stage are passed through the ensemble NER tagger to extract the adverse mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 3-Automatic extraction and normalization of adverse effects in English tweets: Extraction",
"sec_num": "3.3"
},
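The BIO tagging and ensemble idea can be illustrated with a small stand-alone sketch. The paper does not specify how the three taggers' outputs are combined, so per-token majority voting is shown here purely as one plausible scheme; the tag sequences below are hypothetical stand-ins for the BERT, SciBERT and BioBERT predictions.

```python
from collections import Counter

def ensemble_vote(tag_sequences):
    """Majority-vote the per-token BIO tags predicted by several NER models."""
    return [Counter(tags).most_common(1)[0][0] for tags in zip(*tag_sequences)]

def extract_spans(tokens, tags):
    """Collect the B/I-tagged token runs as adverse reaction mentions."""
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:
            current.append(token)
        else:
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

tokens = ["this", "drug", "makes", "me", "so", "drowsy"]
# Hypothetical predictions from the three fine-tuned taggers:
preds = [["O", "O", "O", "O", "B", "I"],
         ["O", "O", "O", "O", "O", "I"],
         ["O", "O", "O", "O", "B", "I"]]
voted = ensemble_vote(preds)
print(extract_spans(tokens, voted))  # ['so drowsy']
```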
{
"text": "\u2022 Transformer based question answering system: The tweets that were classified as containing adverse mentions, but did not yield any extracts from the previous ADR mention extraction NER stage were passed through the question answering stage for adverse reaction extraction. This stage is sub divided into the following two steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 3-Automatic extraction and normalization of adverse effects in English tweets: Extraction",
"sec_num": "3.3"
},
{
"text": "-NER tagger for drug detection: We trained a BERT NER tagger for detecting drug names in a tweet. We tagged each tweet using the standard 'BIO' scheme to distinguish tokens containing drug names from other tokens, and trained the classifier for 5 epochs with a learning rate of 3e-5. We passed tweets through this tagger to extract the drug name and passed the drug name and tweet to the below step. -BioBERT question answering: Given the tweet and the drug name as context, we fine tuned",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 3-Automatic extraction and normalization of adverse effects in English tweets: Extraction",
"sec_num": "3.3"
},
{
"text": "BioBERT question answering on our training dataset to extract the adverse reaction to the drug. For example, after identifying the drug name (for example Tylenol) through the drug NER tagger, we constructed the question \"What is the adverse effect of Tylenol?\". Given the constructed question and the tweet as context, we fine tuned the BioBERT question answering model for 3 epochs with a learning rate of 5e-6, to extract the adverse reaction mention from the tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 3-Automatic extraction and normalization of adverse effects in English tweets: Extraction",
"sec_num": "3.3"
},
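Putting the three stages together, the extraction pipeline reduces to the control flow below. This is a schematic only: the stub predicates stand in for the fine-tuned models, and all function names are ours rather than from the paper's code.

```python
def classify_adr(tweet):
    """Stage 1 stub: the RoBERTa ADR-tweet classifier."""
    return "headache" in tweet or "drowsy" in tweet

def ner_extract(tweet):
    """Stage 2 stub: the ensemble NER tagger; may miss some mentions."""
    return ["headache"] if "headache" in tweet else []

def detect_drug(tweet):
    """Stub for the drug-name NER tagger."""
    return "tylenol" if "tylenol" in tweet else None

def qa_extract(tweet, drug):
    """Stage 3 stub: BioBERT QA asking 'What is the adverse effect of <drug>?'."""
    return ["drowsy"] if "drowsy" in tweet else []

def extract_adr(tweet):
    if not classify_adr(tweet):      # stage 1: filter out non-ADR tweets
        return []
    mentions = ner_extract(tweet)    # stage 2: ensemble NER
    if not mentions:                 # stage 3: QA fallback when NER finds nothing
        drug = detect_drug(tweet)
        if drug:
            mentions = qa_extract(tweet, drug)
    return mentions

print(extract_adr("tylenol makes me so drowsy"))  # ['drowsy']
print(extract_adr("took tylenol, feeling fine"))  # []
```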
{
"text": "The essence of this task was to assign the most probable MedDRA code to the extracted adverse reaction mention from a tweet. The distribution of the number of training examples per MedDRA code followed a long tail distribution, which led to majority of the MedDRA codes having insufficient training examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 3-Automatic extraction and normalization of adverse effects in English tweets: Normalization",
"sec_num": "3.4"
},
{
"text": "We overcame this problem by enriching the training data set with additional CADEC (Karimi et al., 2015) and UMLS (Bodenreider, 2004) adverse reaction to MedDRA code mapping pairs. We ensured the number of examples are approximately 50 for each MedDRA code. We leveraged pre-trained fastText word embeddings to map the extracted adverse reactions to the most probable MedDRA code. We denoted each of the 475 MedDRA codes by a 300 dimensional vector, which was computed by mean pooling the fastText word embeddings of all the 50 adverse reaction extracts associated with the MedDRA code. For each adverse reaction mention extract, we mean pooled the 300 dimensional fastText embedding of the extract and the tweet in a heuristically determined proportion of 10:1 and represented it as a fixed 300 dimensional vector. Finally cosine similarity between the 300 dimensional extract vector and all the 300 dimensional vectors for MedDRA codes was performed to determine the closest MedDRA code for the extract. The extract normalization process can be formulated by the following formulas.",
"cite_spans": [
{
"start": 82,
"end": 103,
"text": "(Karimi et al., 2015)",
"ref_id": "BIBREF5"
},
{
"start": 113,
"end": 132,
"text": "(Bodenreider, 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task 3-Automatic extraction and normalization of adverse effects in English tweets: Normalization",
"sec_num": "3.4"
},
{
"text": "During training:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 3-Automatic extraction and normalization of adverse effects in English tweets: Normalization",
"sec_num": "3.4"
},
{
"text": "x_{extracts} = (x_{extract_1}, x_{extract_2}, ..., x_{extract_n}), \\quad x_{extract_i} = [x_1, x_2, ..., x_{300}]^T, \\quad x_{MedDRA\\_code_j} = \\frac{1}{n} \\sum_{x_{extract_i} \\in x_{extracts}} x_{extract_i} = [\\bar{x}_1, \\bar{x}_2, ..., \\bar{x}_{300}]^T, \\quad X_{MedDRA\\_code} = [x_{MedDRA\\_code_1}, ..., x_{MedDRA\\_code_{475}}]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 3-Automatic extraction and normalization of adverse effects in English tweets: Normalization",
"sec_num": "3.4"
},
{
"text": "During validation/testing:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 3-Automatic extraction and normalization of adverse effects in English tweets: Normalization",
"sec_num": "3.4"
},
{
"text": "x extract embedding = [p 1 , p 2 , ..., p 300 ] T x tweet embedding = [q 1 , q 2 , ..., q 300 ] T x extract embedding contextual = [(10p 1 + q 1 )/2, ..., (10p 300 + q 300 )/2] T closest M edDRA code = argmax x T extract embedding contextual \u2022 X M edDRA code norm(x extract embedding contextual ) * norm(X M edDRA code , dim = 0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 3-Automatic extraction and normalization of adverse effects in English tweets: Normalization",
"sec_num": "3.4"
},
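Under these definitions, normalization is a cosine-similarity lookup. A minimal numpy sketch follows, assuming the formulas above; random vectors stand in for the fastText embeddings, the 475 codes are reduced to 3 for brevity, and the code IDs are examples taken from section 5.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 300

# Each MedDRA code vector is the mean of the fastText embeddings of its ~50 training extracts.
code_ids = [10001125, 10012336, 10019211]
code_vectors = np.stack([rng.normal(size=(50, DIM)).mean(axis=0) for _ in code_ids])

def normalize(extract_emb, tweet_emb, code_vectors, code_ids):
    """Map an extract to the closest MedDRA code by cosine similarity,
    mixing extract and tweet embeddings in the 10:1 proportion from section 3.4."""
    contextual = (10 * extract_emb + tweet_emb) / 2
    sims = code_vectors @ contextual / (
        np.linalg.norm(code_vectors, axis=1) * np.linalg.norm(contextual))
    return code_ids[int(np.argmax(sims))]

extract_emb = rng.normal(size=DIM)  # stand-in for the extract's fastText embedding
tweet_emb = rng.normal(size=DIM)    # stand-in for the whole tweet's embedding
print(normalize(extract_emb, tweet_emb, code_vectors, code_ids))
```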
{
"text": "Strict and relaxed F1 scores are used for evaluating the models. Under strict mode of evaluation, ADR spans are considered correct only if both start and end indices matches with the indices in the gold standard annotations. Under relaxed mode of evaluation, ADR spans are considered correct only if spans in predicted annotations overlapped with the gold standard annotations. In our system, this leads to significant differences between the two F1 scores. Our RoBERTa based English tweet classifier for task 2 outperforms most systems, and achieves a test F1 score of 0.56. With the multi staged adverse reaction extraction pipeline, we are able to achieve a relaxed F1 score of 0.576, which is above the average F1 of all the other submitted systems. Unfortunately, due to highly imbalanced training data, our French tweet classifier is not optimally trained, and fails to correctly identify any French tweets containing adverse reaction mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Below we summarize the performance of our systems on all the different tasks. Table 2 summarizes our system's performance on the validation set. In table 3 we summarize our system's performance on the test set, and also show a comparison between our system's performance, and the average performance of all the systems in each of the tasks. ",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 85,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "As represented in Table 1 , although most of the classification tasks had imbalanced training data the transformer models performed well, as they are more resilient to imbalanced classes compared to traditional machine learning models. For the French tweet classifier, the models learning capabilities were seriously hampered by the highly imbalanced data set, and re-sampling techniques did not help. Adverse drug reaction extract normalization was a particularly challenging task. We experimented with several hierarchical recurrent neural network based architectures, transformer architectures and fixed embedding based similarity architectures. We finalised on using embedding based similarity architecture as the other architectures did not boost the score much. In order to understand more about the problem, we looked into the data closely and uncovered the following patterns.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "There are extracts which are very similar in meaning, yet mapped to different MedDRA codes. For example, the extracts 'addiction' and 'addictive' are very similar in meaning, but are mapped to Med-DRA codes 10001125 and 10012336 respectively. To make our normalization algorithm resilient to such differences, we included the embedding of the tweet as context, along with the embedding of the extract while mapping the extract to the MedDRA code. As discussed in section 3.4, we heuristically determined a weight of 10:1 for pooling the extract and tweet embedding to generate more contextual vector representation of the extract. Figure 1a illustrates the problem of overlapping extract embeddings in 2-D using T-SNE. We can see the extracts forming clusters, which makes the MedDRA code mapping problem hard.",
"cite_spans": [],
"ref_spans": [
{
"start": 631,
"end": 640,
"text": "Figure 1a",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Extracts with similar meaning mapped to distinct MedDRA codes",
"sec_num": "5.1"
},
{
"text": "Consider the tweets 'addicted to nicotine badly' and '... dante addicted to that nicotine'. In both the tweets 'nicotine' is tagged as the drug, and 'addicted' is the adverse reaction extract. Intuitively, both the tweets should map to the same MedDRA code. However in the training data, the first tweet maps to MedDRA code 10012336, which stands for 'dependence addictive', and the second tweet maps to MedDRA code 10001125, which stands for 'addiction'. These examples hamper the learning capabilities of the models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potential training data set issues",
"sec_num": "5.2"
},
{
"text": "In this work, we have experimented with different transformer models for classifying ADR tweets as well as extracting ADR terms. We have leveraged several transformer based pre-trained models like RoBERTa, BioBERT etc. Also, we have devised a multi step pipeline for extracting the ADR terms from a tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We can immediately think of two future improvements, firstly for the non English tasks in Task 2 we can develop a translation model to translate the tweets into English before classifying using the Task 2 English tweet classifier. Secondly, bettering the NER and MedDRA mapping, we want to incorporate a model that will be jointly trained to perform multiple tasks. For example, given a text, the model should be able to extract the ADR extracts as well as classify the tweet as ADR or non-ADR, as well as map it to the correct MedDRA code. Also, we will add a relationship extraction task, where we will identify the relation between the drug and ADR. We hypothesize that such a model should outperform a standard model as it will incorporate features and information sharing across tasks. For example, the NER would make a less false positive classification for non-ADR tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Datastories at semeval-2017 task 4: Deep lstm with attention for message-level and topic-based sentiment analysis",
"authors": [
{
"first": "Christos",
"middle": [],
"last": "Baziotis",
"suffix": ""
},
{
"first": "Nikos",
"middle": [],
"last": "Pelekis",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Doulkeridis",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "747--754",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christos Baziotis, Nikos Pelekis, and Christos Doulkeridis. 2017. Datastories at semeval-2017 task 4: Deep lstm with attention for message-level and topic-based sentiment analysis. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 747-754, Vancouver, Canada, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Scibert: A pretrained language model for scientific text",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: A pretrained language model for scientific text.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The unified medical language system (umls): integrating biomedical terminology",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Bodenreider",
"suffix": ""
}
],
"year": 2004,
"venue": "Nucleic acids research",
"volume": "32",
"issue": "",
"pages": "267--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Bodenreider. 2004. The unified medical language system (umls): integrating biomedical terminology. Nucleic acids research, 32(Database issue):D267-D270, Jan. 14681409[pmid].",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirec- tional transformers for language understanding.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Cadec: A corpus of adverse drug event annotations",
"authors": [
{
"first": "Sarvnaz",
"middle": [],
"last": "Karimi",
"suffix": ""
},
{
"first": "Alejandro",
"middle": [],
"last": "Metke-Jimenez",
"suffix": ""
},
{
"first": "Madonna",
"middle": [],
"last": "Kemp",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of biomedical informatics",
"volume": "55",
"issue": "",
"pages": "73--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. Journal of biomedical informatics, 55:73-81.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Overview of the fifth social media mining for health applications (smm4h) shared tasks at coling 2020",
"authors": [
{
"first": "Ari",
"middle": [
"Z"
],
"last": "Klein",
"suffix": ""
},
{
"first": "Ilseyar",
"middle": [],
"last": "Alimova",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Flores",
"suffix": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Magge",
"suffix": ""
},
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinov",
"suffix": ""
},
{
"first": "Anne-Lyse",
"middle": [],
"last": "Minard",
"suffix": ""
},
{
"first": "Karen",
"middle": [
"O"
],
"last": "Connor",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
},
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Z. Klein, Ilseyar Alimova, Ivan Flores, Arjun Magge, Zulfat Miftahutdinov, Anne-Lyse Minard, Karen O'Connor, Abeed Sarker, Elena Tutubalina, Davy Weissenbacher, and Graciela Gonzalez-Hernandez. 2020. Overview of the fifth social media mining for health applications (smm4h) shared tasks at coling 2020. Pro- ceedings of the Fifth Social Media Mining for Health Applications (SMM4H) Workshop & Shared Task.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adaptation of deep bidirectional multilingual transformers for russian language",
"authors": [
{
"first": "Yuri",
"middle": [],
"last": "Kuratov",
"suffix": ""
},
{
"first": "Mikhail",
"middle": [],
"last": "Arkhipov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuri Kuratov and Mikhail Arkhipov. 2019. Adaptation of deep bidirectional multilingual transformers for russian language.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Alexandre Allauzen, Beno\u00eet Crabb\u00e9, Laurent Besacier, and Didier Schwab",
"authors": [
{
"first": "Hang",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Vial",
"suffix": ""
},
{
"first": "Jibril",
"middle": [],
"last": "Frej",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Segonne",
"suffix": ""
},
{
"first": "Maximin",
"middle": [],
"last": "Coavoux",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Lecouteux",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hang Le, Lo\u00efc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Beno\u00eet Crabb\u00e9, Laurent Besacier, and Didier Schwab. 2019. Flaubert: Unsupervised language model pre- training for french.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "Bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, Sep.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Camembert: a tasty French language model",
"authors": [
{
"first": "Louis",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Pedro Javier Ortiz",
"middle": [],
"last": "Su\u00e1rez",
"suffix": ""
},
{
"first": "Yoann",
"middle": [],
"last": "Dupont",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Romary",
"suffix": ""
},
{
"first": "\u00c9ric",
"middle": [],
"last": "Villemonte de la Clergerie",
"suffix": ""
},
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Ortiz Su\u00e1rez, Yoann Dupont, Laurent Romary, \u00c9ric Villemonte de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2019. Camembert: a tasty French language model.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "KFU NLP team at SMM4H 2019 tasks: Want to extract adverse drugs reactions from tweets? BERT to the rescue",
"authors": [
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinov",
"suffix": ""
},
{
"first": "Ilseyar",
"middle": [],
"last": "Alimova",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "52--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zulfat Miftahutdinov, Ilseyar Alimova, and Elena Tutubalina. 2019. KFU NLP team at SMM4H 2019 tasks: Want to extract adverse drugs reactions from tweets? BERT to the rescue. In Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 52-57, Florence, Italy, August. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "(a) Each point in the scatter plot corresponds to the BERT embedding of an extract which is mapped to a Med-DRA code.(b) ADR extraction process flow.",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Pipeline for extracting adverse mentions from tweets and visualizing the adverse mention extracts in 2-D using t-SNE",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"type_str": "table",
"text": "Results on validation set.",
"html": null,
"content": "<table><tr><td>Task</td></tr></table>"
},
"TABREF2": {
"num": null,
"type_str": "table",
"text": "Results on test set, and comparison against arithmetic mean of best submissions made by other teams.",
"html": null,
"content": "<table/>"
}
}
}
}