{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:34:58.007417Z"
},
"title": "Automatic Detecting for Health-related Twitter Data with BioBERT",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Bai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yunnan University",
"location": {
"settlement": "Yunnan",
"country": "P.R. China"
}
},
"email": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yunnan University",
"location": {
"settlement": "Yunnan",
"country": "P.R. China"
}
},
"email": "zhouxb@ynu.edu.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Social media used for health applications usually contains a large amount of user-posted data, which poses various challenges for NLP, such as colloquial language, spelling errors, and novel or creative phrases. In this paper, we describe our system submitted to the SMM4H 2020: Social Media Mining for Health Applications Shared Task, which consists of five sub-tasks (Klein and Gonzalez-Hernandez, 2020). We participated in subtask 1, subtask 2-English, and subtask 5. Our final submitted approach is an ensemble of various fine-tuned transformer-based models. We show that these approaches perform well on imbalanced datasets (for example, the class ratio is 1:10 in subtask 2) but poorly on extremely imbalanced datasets (for example, the class ratio is 1:400 in subtask 1). On the test sets, our result is below the average score in subtask 1, above the average score in subtask 2-English, and the highest score in subtask 5.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Social media used for health applications usually contains a large amount of user-posted data, which poses various challenges for NLP, such as colloquial language, spelling errors, and novel or creative phrases. In this paper, we describe our system submitted to the SMM4H 2020: Social Media Mining for Health Applications Shared Task, which consists of five sub-tasks (Klein and Gonzalez-Hernandez, 2020). We participated in subtask 1, subtask 2-English, and subtask 5. Our final submitted approach is an ensemble of various fine-tuned transformer-based models. We show that these approaches perform well on imbalanced datasets (for example, the class ratio is 1:10 in subtask 2) but poorly on extremely imbalanced datasets (for example, the class ratio is 1:400 in subtask 1). On the test sets, our result is below the average score in subtask 1, above the average score in subtask 2-English, and the highest score in subtask 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "According to the United States Centers for Disease Control and Prevention (CDC), drugs administered to alleviate common ailments are the fourth leading cause of death, and birth defects are the leading cause of infant mortality. These are among the most important medical problems for human society (Giacomini et al., 2007). Twitter, a popular micro-blogging service, has received much attention recently. It is an online network used by millions of people around the world to stay connected to their friends, family members, and coworkers through their computers and mobile telephones (Barbosa and Feng, 2010). On average, one in a thousand messages in public Twitter data is health-related. These health-related Twitter posts can help us analyze various human health-related phenomena. For example, there are limited methods for studying birth defects in infants, and obtaining this knowledge has been challenging (Klein et al., 2018; Klein et al., 2019). This situation provides a challenging opportunity, given the increasing number of related tweets on Twitter.",
"cite_spans": [
{
"start": 298,
"end": 322,
"text": "(Giacomini et al., 2007)",
"ref_id": "BIBREF3"
},
{
"start": 584,
"end": 608,
"text": "(Barbosa and Feng, 2010)",
"ref_id": "BIBREF1"
},
{
"start": 907,
"end": 926,
"text": "(Klein et al., 2018",
"ref_id": "BIBREF5"
},
{
"start": 927,
"end": 947,
"text": ")(Klein et al., 2019",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With this motivation, five shared tasks were conducted as part of the Social Media Mining for Health Applications (SMM4H) Workshop 2020, hosted by the University of Pennsylvania Health Language Processing (HLP) Lab. Our team participated in subtasks 1, 2-English, and 5 of the workshop. Some samples from the training set are given in Table 1 , and the tasks are:",
"cite_spans": [],
"ref_spans": [
{
"start": 333,
"end": 340,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Sub-task 1: Automatic classification of tweets that mention medications",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We treat subtask 1 as a binary classification task. The model must determine whether a medication or dietary supplement is mentioned in a tweet and predict a label L, where L \u2208 {mentions a medication or dietary supplement - 1, no mention - 0}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Sub-task 2-English: Automatic classification of English tweets that report adverse effects. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We treat subtask 2-English as a binary classification task. The model must determine whether an adverse effect of a medication is reported in a tweet and predict a label L, where L \u2208 {reports an adverse effect of a medication - 1, no report - 0}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Sub-task 5: Automatic classification of tweets reporting a birth defect pregnancy outcome",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We treat subtask 5 as a ternary classification task. The model must determine whether the tweet refers to the user's child and indicates that the child has the birth defect mentioned, and predict a label L, where L \u2208 {refers to the user's child and indicates that he/she has the birth defect mentioned in the tweet - 1, ambiguous about whether someone is the user's child and/or has the birth defect mentioned in the tweet - 2, merely mentions birth defects - 3}. Table 1 shows samples from the training sets of the three subtasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 558,
"end": 565,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We try a variety of approaches for these tasks, including classical machine learning methods, CNN models, RNN models, and transformer-based models. We find that the transformer-based models consistently outperform the other methods. The framework of our implementation is shown in Figure 1 . Firstly, we combine the official training set and validation set into a new data set, which is split into a new training set and validation set using stratified 5-fold cross-validation 1 . Secondly, we predict the test set with the fine-tuned model. Thirdly, we create pseudo-labels, combine them with the training set, and feed these data into the model for training and prediction. Finally, we obtain the final result by hard voting. ",
"cite_spans": [],
"ref_spans": [
{
"start": 283,
"end": 291,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Normally, data in the health field is difficult to obtain. Fortunately, the existence of social media such as Twitter has alleviated this situation. However, such data is difficult to handle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Because diseases and other health problems affect only a small proportion of people, health-related data on social media is imbalanced. These problems pose great challenges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Much previous work has focused on tracking and monitoring diseases on social media. (Yin et al., 2015) developed a scalable system by training classifiers on a dataset of 34 health topics, creating a general health classifier using a standard SVM. Recently, the use of dense word vectors or embeddings has become popular. While earlier models (e.g., Word2Vec (Pennington et al., 2014) , GloVe (Pennington et al., 2014) , FastText (Joulin et al., 2016) ) focus on learning context-independent word representations, recent work has focused on learning context-dependent word representations. For instance, BERT (Devlin et al., 2018 ) is a contextualized word representation model based on a masked language model and pre-trained using bidirectional transformers (Weissenbacher et al., 2019b) . BioBERT is a domain-specific language representation model pre-trained on a large biomedical corpus (Lee et al., 2020) . In this paper, we use BioBERT in our experiments and apply it to the different text classification tasks of the Social Media Mining for Health Workshop. In the following, we describe our dataset and the methods used for the different tasks.",
"cite_spans": [
{
"start": 84,
"end": 102,
"text": "(Yin et al., 2015)",
"ref_id": "BIBREF15"
},
{
"start": 364,
"end": 389,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 398,
"end": 423,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 435,
"end": 456,
"text": "(Joulin et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 616,
"end": 636,
"text": "(Devlin et al., 2018",
"ref_id": "BIBREF2"
},
{
"start": 780,
"end": 808,
"text": "Weissenbacher et al., 2019b)",
"ref_id": "BIBREF14"
},
{
"start": 911,
"end": 929,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The data sets for these tasks all come from social media. They cover rare health-related events such as cohorts of pregnant women, drug effects, and birth defects (Sarker et al., 2017; Weissenbacher et al., 2019a; O'Connor et al., 2020 ).",
"cite_spans": [
{
"start": 172,
"end": 193,
"text": "(Sarker et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 194,
"end": 222,
"text": "Weissenbacher et al., 2019a;",
"ref_id": "BIBREF13"
},
{
"start": 223,
"end": 244,
"text": "O'Connor et al., 2020",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 3.1 Dataset",
"sec_num": "3"
},
{
"text": "\u2022 For shared task 1: the training set contains 55,419 tweets, of which 146 mention medications (1) and 55,273 do not (0). The validation set contains 13,853 tweets, 35 labeled 1 and 13,818 labeled 0. This is an extremely imbalanced data set, so the evaluation metric for this task is the F1-score for the positive class (i.e., tweets that mention medications).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 3.1 Dataset",
"sec_num": "3"
},
{
"text": "\u2022 For shared task 2: the training set contains 20,544 tweets, of which 1,903 report adverse effects of medications (1) and 18,461 do not (0). The validation set contains 5,134 tweets, of which 474 report adverse effects of medications (1) and 4,660 do not (0). This is an imbalanced data set, so the evaluation metric for this task is the F1-score for the positive class (i.e., tweets that report adverse effects of medications).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 3.1 Dataset",
"sec_num": "3"
},
{
"text": "\u2022 For shared task 5: the training set contains 14,717 tweets: 773 in the 'defect' class (1), 834 in the 'possible defect' class (2), and 13,110 in the 'non-defect' class (3). The validation set contains 3,680 tweets: 193 'defect' (1), 207 'possible defect' (2), and 3,280 'non-defect' (3). This is an imbalanced data set, so the evaluation metric for this task is the micro-averaged F1-score over the 'defect' and 'possible defect' classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 3.1 Dataset",
"sec_num": "3"
},
{
"text": "There are some limitations in applying general-domain NLP models directly to biomedical text mining. Recent word representation models such as word2vec, ELMo, and BERT are trained and tested on data sets of common-domain text, and they do not perform well on biomedical text data sets. We therefore choose BioBERT 2 as our model for the tasks we participated in. For classification tasks, the pooler output of BioBERT is obtained from the last-layer hidden state of the first token of the sequence (the CLS token), further processed by a linear layer and a tanh activation function. However, the pooler output is usually not a good summary of the semantic content of the input, so we try the following model architectures to alleviate this problem. Our model architectures are shown in Figure 2 . For Figure 2 (a) , the model can be regarded as two parts: the first obtains the pooler output of BioBERT (P O), and the second is a BiGRU module with input P O. For Figure 2 (b), we concatenate P O and the H 0 of the last two hidden layers and feed them into the classifier. For Figure 2 (c), we concatenate the H 0 of the last two hidden layers and feed them into the classifier. For Figure 2 (d), we concatenate P O and the H 0 of the last three hidden layers and feed them into the classifier.",
"cite_spans": [],
"ref_spans": [
{
"start": 785,
"end": 793,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 800,
"end": 812,
"text": "Figure 2 (a)",
"ref_id": "FIGREF1"
},
{
"start": 962,
"end": 970,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1078,
"end": 1088,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1169,
"end": 1177,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "In order to make our model more robust, we use pseudo-labels. First, we feed the training set and the validation set into model A for training and predict a result A on the test set. Second, we add 10% of result A to the training set to obtain a training set with pseudo-labels. Then, we feed the pseudo-labeled training set and the validation set into model A for training and predict a final result on the test set. The pseudo-labeling process is shown in Figure 3 . BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining) is a domain-specific language representation model pre-trained on large-scale biomedical corpora. It greatly outperforms BERT and previous state-of-the-art models on many biomedical text mining tasks. For all tasks, we use the BioBERT-Base-v1.1-PubMed model with RAdam (Liu et al., 2019) as the optimizer. To increase the diversity of the models in the ensemble, we preprocess the data input to some models with simple cleaning steps such as removing URLs, standardizing case, and standardizing @usernames. Table 2 shows the hyperparameters of these models. Firstly, we combine the official training set and validation set into a new data set, which is split into a new training set and validation set using stratified 5-fold cross-validation. Secondly, we train models one to six on the training set and predict six results on the test set; we combine these six results (result1-6) into result-A by hard voting. Thirdly, to create pseudo-labels, we take 10% of result-A and combine it into the training set; we train models seven and eight on the pseudo-labeled data set and obtain their predictions (result7-8). Fourthly, we train model nine and obtain its prediction (result9). Finally, we combine result1-9 by hard voting to get the final result. Table 3 presents the performance scores for the three subtasks on the test data; our results are based on hard voting over the nine models in Table 2 . For all tasks, we use only the official training set and validation set and do not use any external data. As can be seen from Table 3 (the results of our method on the test set for the three subtasks), our method has no advantage on extremely imbalanced data, but it achieves good results on imbalanced data.",
"cite_spans": [
{
"start": 827,
"end": 845,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 463,
"end": 471,
"text": "Figure 3",
"ref_id": null
},
{
"start": 2047,
"end": 2054,
"text": "Table 3",
"ref_id": null
},
{
"start": 2180,
"end": 2187,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 2368,
"end": 2375,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pseudo Label",
"sec_num": "3.3"
},
{
"text": "In this work, our method is an ensemble of various fine-tuned transformer-based models for these tasks, organized as shared tasks in the Social Media Mining for Health Workshop 2020, and we obtain decent results. Our biggest regret is that for the extremely imbalanced data sets we did not do much additional processing of the data and could not achieve promising results. In the future, our work will focus on the problem of extremely imbalanced data. The code is available online. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://scikit-learn.org/stable/modules/generated/sklearn.model selection.StratifiedKFold.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/vthost/biobert-pretrained-pytorch/releases/tag/v1.1-pubmed",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Overview of the fifth social media mining for health applications (#SMM4H) shared tasks at COLING 2020",
"authors": [],
"year": 2020,
"venue": "proceedings of the fifth social media mining for health applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Z. Klein, Ilseyar Alimova, Ivan Flores, Arjun Magge, Zulfat Miftahutdinov, Anne-Lyse Minard, Karen O'Connor, Abeed Sarker, Elena Tutubalina, Davy Weissenbacher, and Graciela Gonzalez-Hernandez. 2020. Overview of the fifth social media mining for health applications (#SMM4H) shared tasks at COLING 2020. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Robust sentiment detection on Twitter from biased and noisy data",
"authors": [
{
"first": "Luciano",
"middle": [],
"last": "Barbosa",
"suffix": ""
},
{
"first": "Junlan",
"middle": [],
"last": "Feng",
"suffix": ""
}
],
"year": 2010,
"venue": "Coling 2010: Posters",
"volume": "",
"issue": "",
"pages": "36--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luciano Barbosa and Junlan Feng. 2010. Robust sentiment detection on Twitter from biased and noisy data. In Coling 2010: Posters, pages 36-44.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "When good drugs go bad",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kathleen",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"M"
],
"last": "Giacomini",
"suffix": ""
},
{
"first": "Dan",
"middle": [
"M"
],
"last": "Krauss",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Roden",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Eichelbaum",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Michael R Hayden",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2007,
"venue": "Nature",
"volume": "446",
"issue": "7139",
"pages": "975--977",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathleen M Giacomini, Ronald M Krauss, Dan M Roden, Michel Eichelbaum, Michael R Hayden, and Yusuke Nakamura. 2007. When good drugs go bad. Nature, 446(7139):975-977.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.01759"
]
},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Social media mining for birth defects research: a rule-based, bootstrapping approach to collecting data for rare healthrelated events on Twitter",
"authors": [
{
"first": "Abeed",
"middle": [],
"last": "Ari Z Klein",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Davy",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of biomedical informatics",
"volume": "87",
"issue": "",
"pages": "68--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Z Klein, Abeed Sarker, Haitao Cai, Davy Weissenbacher, and Graciela Gonzalez-Hernandez. 2018. Social media mining for birth defects research: a rule-based, bootstrapping approach to collecting data for rare health-related events on Twitter. Journal of biomedical informatics, 87:68-78.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards scaling Twitter for digital epidemiology of birth defects",
"authors": [
{
"first": "Abeed",
"middle": [],
"last": "Ari Z Klein",
"suffix": ""
},
{
"first": "Davy",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
}
],
"year": 2019,
"venue": "NPJ digital medicine",
"volume": "2",
"issue": "1",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Z Klein, Abeed Sarker, Davy Weissenbacher, and Graciela Gonzalez-Hernandez. 2019. Towards scaling Twitter for digital epidemiology of birth defects. NPJ digital medicine, 2(1):1-9.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "On the variance of the adaptive learning rate and beyond",
"authors": [
{
"first": "Liyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Haoming",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.03265"
]
},
"num": null,
"urls": [],
"raw_text": "Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2019. On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Promoting reproducible research for characterizing nonmedical use of medications through data annotation: Description of a Twitter corpus and guidelines",
"authors": [
{
"first": "O'",
"middle": [],
"last": "Karen",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Jeanmarie",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Graciela",
"middle": [
"Gonzalez"
],
"last": "Perrone",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hernandez",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of medical Internet research",
"volume": "22",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen O'Connor, Abeed Sarker, Jeanmarie Perrone, and Graciela Gonzalez Hernandez. 2020. Promoting reproducible research for characterizing nonmedical use of medications through data annotation: Description of a Twitter corpus and guidelines. Journal of medical Internet research, 22(2):e15861.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Discovering cohorts of pregnant women from social media for safety surveillance and analysis",
"authors": [
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Pramod",
"middle": [],
"last": "Chandrashekar",
"suffix": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Magge",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of medical Internet research",
"volume": "19",
"issue": "10",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abeed Sarker, Pramod Chandrashekar, Arjun Magge, Haitao Cai, Ari Klein, and Graciela Gonzalez. 2017. Discovering cohorts of pregnant women from social media for safety surveillance and analysis. Journal of medical Internet research, 19(10):e361.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Overview of the third social media mining for health (SMM4H) shared tasks at EMNLP",
"authors": [
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "13--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davy Weissenbacher, Abeed Sarker, Michael Paul, and Graciela Gonzalez. 2018. Overview of the third social media mining for health (SMM4H) shared tasks at EMNLP 2018. In Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop & Shared Task, pages 13-16.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Deep neural networks ensemble for detecting medication mentions in tweets",
"authors": [
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Oconnor",
"suffix": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Magge",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of the American Medical Informatics Association",
"volume": "26",
"issue": "12",
"pages": "1618--1626",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davy Weissenbacher, Abeed Sarker, Ari Klein, Karen OConnor, Arjun Magge, and Graciela Gonzalez-Hernandez. 2019a. Deep neural networks ensemble for detecting medication mentions in tweets. Journal of the American Medical Informatics Association, 26(12):1618-1626.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Overview of the fourth social media mining for health (SMM4H) shared tasks at ACL 2019",
"authors": [
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Magge",
"suffix": ""
},
{
"first": "Ashlynn",
"middle": [],
"last": "Daughton",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Oconnor",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Social Media Mining for Health Applications (# SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "21--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davy Weissenbacher, Abeed Sarker, Arjun Magge, Ashlynn Daughton, Karen OConnor, Michael Paul, and Graciela Gonzalez. 2019b. Overview of the fourth social media mining for health (SMM4H) shared tasks at ACL 2019. In Proceedings of the Fourth Social Media Mining for Health Applications (# SMM4H) Workshop & Shared Task, pages 21-30.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A scalable framework to detect personal health mentions on Twitter",
"authors": [
{
"first": "Zhijun",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Fabbri",
"suffix": ""
},
{
"first": "Bradley",
"middle": [],
"last": "Trent Rosenbloom",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Malin",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of medical Internet research",
"volume": "17",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhijun Yin, Daniel Fabbri, S Trent Rosenbloom, and Bradley Malin. 2015. A scalable framework to detect personal health mentions on Twitter. Journal of medical Internet research, 17(6):e138.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Our framework"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "The models we used (H is the hidden layer of BioBERT, P O is the pooler output, and H 0 is the hidden state of the first token of the sequence (CLS token) at the output of a hidden layer of the model)."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Figure 3: The process of pseudo label"
},
"TABREF1": {
"html": null,
"text": "",
"content": "<table><tr><td>shows the hyper-parameters used by our different models. Preprocessing data is simple data cleaning of</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF2": {
"html": null,
"text": "Hyperparameters of these models in our experiments for three subtasks.",
"content": "<table/>",
"type_str": "table",
"num": null
}
}
}
}