{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:35:12.481446Z"
},
"title": "Overview of the Fifth Social Media Mining for Health Applications (#SMM4H) Shared Tasks at COLING 2020",
"authors": [
{
"first": "Ari",
"middle": [
"Z"
],
"last": "Klein",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"settlement": "Philadelphia",
"region": "PA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Ilseyar",
"middle": [],
"last": "Alimova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Kazan Federal University",
"location": {
"settlement": "Kazan",
"country": "Russia"
}
},
"email": "alimovailseyar@gmail.com"
},
{
"first": "Ivan",
"middle": [],
"last": "Flores",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"settlement": "Philadelphia",
"region": "PA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Magge",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"settlement": "Philadelphia",
"region": "PA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Kazan Federal University",
"location": {
"settlement": "Kazan",
"country": "Russia"
}
},
"email": "zulfatmi@gmail.com"
},
{
"first": "Anne-Lyse",
"middle": [],
"last": "Minard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LLL-CNRS",
"location": {
"settlement": "Orl\u00e9ans",
"country": "France"
}
},
"email": "anne-lyse.minard@univ-orleans.fr"
},
{
"first": "Karen",
"middle": [],
"last": "O'Connor",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"settlement": "Philadelphia",
"region": "PA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Emory University",
"location": {
"settlement": "Atlanta",
"region": "GA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Kazan Federal University",
"location": {
"settlement": "Kazan",
"country": "Russia"
}
},
"email": "tutubalinaev@gmail.com"
},
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"settlement": "Philadelphia",
"region": "PA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"settlement": "Philadelphia",
"region": "PA",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The vast amount of data on social media presents significant opportunities and challenges for utilizing it as a resource for health informatics. The fifth iteration of the Social Media Mining for Health Applications (#SMM4H) shared tasks sought to advance the use of Twitter data (tweets) for pharmacovigilance, toxicovigilance, and epidemiology of birth defects. In addition to reruns of three tasks, #SMM4H 2020 included new tasks for detecting adverse effects of medications in French and Russian tweets, characterizing chatter related to prescription medication abuse, and detecting self-reports of birth defect pregnancy outcomes. The five tasks required methods for binary classification, multi-class classification, and named entity recognition (NER). With 29 teams and a total of 130 system submissions, participation in the #SMM4H shared tasks continues to grow.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The vast amount of data on social media presents significant opportunities and challenges for utilizing it as a resource for health informatics. The fifth iteration of the Social Media Mining for Health Applications (#SMM4H) shared tasks sought to advance the use of Twitter data (tweets) for pharmacovigilance, toxicovigilance, and epidemiology of birth defects. In addition to reruns of three tasks, #SMM4H 2020 included new tasks for detecting adverse effects of medications in French and Russian tweets, characterizing chatter related to prescription medication abuse, and detecting self-reports of birth defect pregnancy outcomes. The five tasks required methods for binary classification, multi-class classification, and named entity recognition (NER). With 29 teams and a total of 130 system submissions, participation in the #SMM4H shared tasks continues to grow.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The aim of the Social Media Mining for Health Applications (#SMM4H) shared tasks is to take a community-driven approach to addressing natural language processing (NLP) challenges of utilizing social media data for health informatics, including informal, colloquial expressions of clinical concepts, noise, data sparsity, ambiguity, and multilingual posts. The fifth iteration of the #SMM4H shared tasks consisted of five tasks involving mining health-related information from Twitter data (tweets): automatic classification of tweets that mention medications (Task 1); automatic classification of multilingual tweets that report adverse effects of a medication (Task 2), with sub-tasks for distinct sets of tweets posted in English (Task 2a), French (Task 2b), and Russian (Task 2c); automatic extraction and normalization of adverse effects in English tweets (Task 3); automatic characterization of chatter related to prescription medication abuse in tweets (Task 4); and automatic classification of tweets self-reporting a birth defect pregnancy outcome (Task 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Teams could register for one or multiple tasks. In total, 57 teams registered for at least one task. To develop their systems, teams were provided with annotated training and validation sets of tweets for each task. For the final evaluation, teams were provided with an unlabeled test set for each task and were allowed up to four days to submit the predictions of their systems to CodaLab 1 , a platform that facilitates data science competitions. Each team was allowed to submit up to three sets of predictions per task. In total, 29 of the 57 registered teams submitted at least one set of predictions. More specifically, 16 teams participated in Task 1 (40 submissions), 17 teams in Task 2a (35 submissions), 5 teams in Task 2b (7 submissions), 7 teams in Task 2c (14 submissions), 7 teams in Task 3 (15 submissions), 3 teams in Task 4 (9 submissions), and 4 teams in Task 5 (10 submissions). In Section 2, we briefly describe the tasks. In Section 3, we present the performance and a brief summary of each team's best-performing system for each task. Appendix A provides the system description papers corresponding to the team numbers used in Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Task 1 is a binary classification task that involves distinguishing tweets that mention a medication or dietary supplement (annotated as \"1\") from those that do not (annotated as \"0\"). For this task, we used the definition of drug products and dietary supplements provided by the FDA (U.S. Food and Drug Administration, 2017). For the #SMM4H 2018 shared tasks, a data set was used that contained an artificially balanced distribution of the two classes. For #SMM4H 2020, the data set represents their natural, highly imbalanced distribution. Evaluating classifiers on this data set more closely models the detection of tweets that mention medications in practice. The training set contains 69,272 tweets, with only 181 (0.3%) tweets that mention a medication. The 9622 training tweets from #SMM4H 2018 were also provided, with 4975 tweets that mention a medication. The test set contains 29,687 tweets, with only 77 (0.3%) tweets that mention a medication. Systems were evaluated based on the F1-score for the \"positive\" class (i.e., tweets that mention a medication).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks 2.1 Task 1: Automatic Classification of Tweets that Mention Medications",
"sec_num": "2"
},
{
"text": "Task 2 is a binary classification task that involves distinguishing tweets that report an adverse effect of a medication (annotated as \"1\") from those that do not (annotated as \"0\"), with three sub-tasks for distinct sets of tweets posted in English, French, and Russian. The training set for the long-running, English-language version of this #SMM4H shared task contains 25,678 tweets, with 2377 (9.3%) tweets that report an adverse effect of a medication. The test set contains 4759 tweets, with 194 (4.1%) tweets that report an adverse effect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 2: Automatic Classification of Multilingual Tweets that Report Adverse Effects",
"sec_num": "2.2"
},
{
"text": "For the French sub-task, the training set contains 2426 tweets, with only 39 (1.6%) tweets that report an adverse effect. The test set contains 607 tweets, with only 10 (1.6%) tweets that report an adverse effect. Inter-annotator agreement, based on dual annotations of 848 tweets by three annotators, was 0.61 and 0.69 for the two pairs of annotators, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 2: Automatic Classification of Multilingual Tweets that Report Adverse Effects",
"sec_num": "2.2"
},
{
"text": "For the Russian sub-task, the training set contains 7612 tweets, with 666 (8.7%) tweets that report an adverse effect. The test set contains 1903 tweets, with 166 (8.7%) tweets that report an adverse effect. All of the Russian tweets were dual annotated; first, three Yandex.Toloka 2 annotators' crowd-sourced labels were aggregated into a single label (Dawid and Skene, 1979), and then the tweets were labeled by a second annotator. Inter-annotator agreement was 0.49 (Cohen's kappa). Systems were evaluated based on the F1-score for the \"positive\" class (i.e., tweets that report an adverse effect).",
"cite_spans": [
{
"start": 353,
"end": 376,
"text": "(Dawid and Skene, 1979)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task 2: Automatic Classification of Multilingual Tweets that Report Adverse Effects",
"sec_num": "2.2"
},
{
"text": "Task 3 is a named entity recognition (NER) and entity normalization task that involves detecting the span of text within a tweet that reports an adverse effect of a medication, and normalizing the adverse effect to a unique Medical Dictionary for Regulatory Activities (MedDRA) 3 version 21.1 preferred term (PT) ID. The training set contains 2806 tweets, with 1829 (65%) tweets that report an adverse effect (annotated as \"ADR\"). For each tweet in the training set that reports an adverse effect, the span of text containing the adverse effect, the character offsets of that span, and the MedDRA ID of the adverse effect were provided. The test set contains 1156 tweets, with 970 (84%) that report an adverse effect. Systems were evaluated based on their F1-score, where a true positive requires both the correct adverse effect span (partially or exactly matching the annotated character offsets) and the correct MedDRA ID.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 3: Automatic Extraction and Normalization of Adverse Effects in English Tweets",
"sec_num": "2.3"
},
{
"text": "Task 4 is a multi-class classification task that involves automatically classifying tweets mentioning potentially abuse-prone medications into one of four categories: (1) potential abuse/misuse (annotated as \"A\"), (2) non-abuse/misuse consumption (annotated as \"C\"), (3) medication mention only, without any indication of consumption (annotated as \"M\"), and (4) unrelated (annotated as \"U\"). The medications mentioned in the tweets include prescription opioids, benzodiazepines, atypical anti-psychotics, central nervous system stimulants, and GABA (gamma-aminobutyric acid) analogues. The training set contains 13,172 tweets: (1) 2133 (16%) \"A\" tweets, (2) 3668 (28%) \"C\" tweets, (3) 6843 (52%) \"M\" tweets, and (4) 528 (4%) \"U\" tweets. The test set contains 3271 tweets: (1) 503 (15%) \"A\" tweets, (2) 919 (28%) \"C\" tweets, (3) 1722 (53%) \"M\" tweets, and (4) 127 (4%) \"U\" tweets. Additional details about the data set, including the annotation process, annotation guidelines, and inter-annotator agreements, are presented in recent work. Systems were evaluated based on the F1-score for the \"potential abuse/misuse\" (\"A\") class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 4: Automatic Characterization of Prescription Medication Abuse Chatter in Tweets",
"sec_num": "2.4"
},
{
"text": "Task 5 is a multi-class classification task that involves automatically distinguishing three classes of tweets that mention birth defects (Klein et al., 2018): (1) \"defect\" tweets refer to the user's child and indicate that he or she has the birth defect mentioned in the tweet (annotated as \"1\"); (2) \"possible defect\" tweets are ambiguous about whether someone is the user's child and/or has the birth defect mentioned in the tweet (annotated as \"2\"); (3) \"non-defect\" tweets merely mention birth defects (annotated as \"3\"). The training set contains 18,397 tweets: 966 (5%) \"defect\" tweets, 1041 (6%) \"possible defect\" tweets, and 16,390 (89%) \"non-defect\" tweets. The test set contains 4602 tweets: 244 (5%) \"defect\" tweets, 258 (6%) \"possible defect\" tweets, and 4100 (89%) \"non-defect\" tweets. Inter-annotator agreement, based on dual annotations for 21,727 of the tweets, was 0.86 (Cohen's kappa). Systems were evaluated based on the micro-averaged F1-score for the \"defect\" and \"possible defect\" classes. Table 1 presents the precision, recall, and F1-score for the \"positive\" class (i.e., tweets that mention a medication) for each of the 16 teams' best-performing systems for Task 1. The majority of teams used a transformer-based architecture. Among these teams, the differences in performance seem to be based on the corpora used to pre-train the transformers and the strategies used to address the high degree of class imbalance. The results suggest that imbalanced data remains a challenge for training deep neural network classifiers. The best-performing system for this task in #SMM4H 2018 achieved an F1-score of 0.918 (Chuhan et al., 2018) using an artificially balanced data set, while the best-performing system in #SMM4H 2020 achieved an F1-score of 0.854. Nonetheless, advances in transformer-based architectures and strategies for addressing class imbalance have improved upon the baseline F1-score of 0.788. Table 2 presents the precision, recall, and F1-score for the \"positive\" class (i.e., English tweets that report an adverse effect of a medication) for each of the 17 teams' best-performing systems for Task 2a. As in Task 1, the majority of teams used a transformer-based architecture. In particular, most of the better- Table 2: Task 2a (English) system summaries and F1-scores (F1), precision (P), and recall (R) for the \"positive\" class (i.e., tweets reporting an adverse effect of a medication).",
"cite_spans": [
{
"start": 138,
"end": 158,
"text": "(Klein et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1016,
"end": 1023,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1946,
"end": 1953,
"text": "Table 2",
"ref_id": null
},
{
"start": 2268,
"end": 2275,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task 5: Automatic Classification of Tweets Reporting a Birth Defect Pregnancy Outcome",
"sec_num": "2.5"
},
{
"text": "performing systems used RoBERTa-based models (Liu et al., 2019), with the best-performing system achieving an F1-score of 0.64. Table 3 presents the precision, recall, and F1-score for the \"positive\" class (i.e., French tweets that report an adverse effect of a medication) for each of the five teams' best-performing systems for Task 2b. The highest F1-score for the French-language version of this task is considerably lower than the highest F1-scores for the automatic classification of adverse effects in English (0.64) and Russian (0.51) tweets. The difficulty of this task is further underscored by the fact that two teams were not able to detect any tweets reporting an adverse effect. This difficulty may be due to the small size of the training data and the high degree of class imbalance. To address the imbalanced data, Team 22 used a Bayesian optimization approach to class weighting, and Team 16 used under-sampling of the majority class. Table 4 presents the precision, recall, and F1-score for the \"positive\" class (i.e., Russian tweets that report an adverse effect of a medication) for each of the seven teams' best-performing systems for Task 2c. Teams 26 and 25 achieved the highest F1-scores (0.51). Both teams used ensembles of BERT-based Russian language models from the DeepPavlov library (Burtsev et al., 2018). In addition, both teams used manually annotated drug reviews from the RuDREC corpus as additional Table 3: Task 2b (French) system summaries and F1-scores (F1), precision (P), and recall (R) for the \"positive\" class (i.e., tweets reporting an adverse effect of a medication).",
"cite_spans": [
{
"start": 45,
"end": 63,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 1322,
"end": 1344,
"text": "(Burtsev et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 3",
"ref_id": null
},
{
"start": 959,
"end": 966,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 1491,
"end": 1498,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Automatic Classification of English Tweets that Report Adverse Effects",
"sec_num": "3.2.1"
},
{
"text": "training data, and Team 25 also used English drug reviews from the PsyTAR corpus (Zolnoori et al., 2019). Table 4: Task 2c (Russian) system summaries and F1-scores (F1), precision (P), and recall (R) for the \"positive\" class (i.e., tweets reporting an adverse effect of a medication). Table 5 presents the F1-scores for the NER-based extraction of adverse effect text spans, and the precision, recall, and F1-scores for the normalization to the MedDRA ID, for each of the seven teams' best-performing systems for Task 3. Team 25 outperformed the other teams for all of the presented performance metrics. For the NER-based extraction, they used a transformer-based architecture with domain-specific models, dictionary-based features, and additional training data from the CADEC corpus (Karimi et al., 2015). For normalization, they used a domain-specific, BERT-based classifier, additional training data, and similarity metrics comparing BERT-based word embeddings of Unified Medical Language System (UMLS) concepts and extracted entity mentions. Several other teams used similar approaches, so the performance of Team 25 might be attributed to their language models that were pre-trained specifically for detecting adverse drug reactions. Table 5: Task 3 system summaries, F1-scores (F1) for adverse effect extraction (E), and F1-scores (F1), precision (P), and recall (R) for adverse effect normalization (N). Table 6 presents the precision, recall, and F1-scores for the \"potential abuse/misuse\" class for each team's best-performing system for Task 4. Team 13 achieved the highest F1-score (0.51) using a CNN, fastText word embeddings, and data augmentation by generating tweets that are semantically similar to the training data. This F1-score, however, is lower than the F1-score (0.67) of a stacked ensemble of BERT (Devlin et al., 2019), ALBERT (Lan et al., 2020), and RoBERTa models, presented in recent work (Ali Al-Garadi et al., 2020). Table 6: Task 4 system summaries and F1-scores (F1), precision (P), and recall (R) for the \"potential abuse/misuse\" class.",
"cite_spans": [
{
"start": 83,
"end": 106,
"text": "(Zolnoori et al., 2019)",
"ref_id": "BIBREF44"
},
{
"start": 784,
"end": 805,
"text": "(Karimi et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 1835,
"end": 1856,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 1866,
"end": 1884,
"text": "(Lan et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 1932,
"end": 1960,
"text": "(Ali Al-Garadi et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 285,
"end": 292,
"text": "Table 5",
"ref_id": null
},
{
"start": 1403,
"end": 1410,
"text": "Table 6",
"ref_id": null
},
{
"start": 1963,
"end": 1970,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Automatic Classification of Russian Tweets that Report Adverse Effects",
"sec_num": "3.2.3"
},
{
"text": "3.5 Task 5: Automatic Classification of Tweets Reporting a Birth Defect Pregnancy Outcome Table 7 presents the micro-averaged precision, recall, and F1-score for the \"defect\" and \"possible defect\" classes for each team's best-performing system. Teams 6 and 24 achieved the highest micro-averaged F1-scores (0.69). While Team 6 achieved a higher micro-averaged recall (0.73) than Team 24 (0.67) using a hard-voting ensemble of nine BioBERT-based models, Team 24 achieved a higher micro-averaged precision (0.71) than Team 6 (0.65) using ELMo word embeddings and data-specific resources for modeling birth defects, pregnancy-related information, people's names, and family relations. Team 19 also achieved a higher micro-averaged recall (0.69) than Team 24 (0.67) using BioBERT, but achieved a substantially lower micro-averaged precision (0.56) than Team 24 (0.71). Overall, for this imbalanced data, models based on contextualized word representations, BioBERT (Lee et al., 2020a) or ELMo (Peters et al., 2018), outperformed a CNN-BiGRU neural network with GloVe word embeddings (Pennington et al., 2014). Recent work presents baseline F1-scores of an SVM classifier for the \"defect\" (0.65) and \"possible defect\" (0.51) classes. Table 7: Task 5 system summaries and micro-averaged F1-score (F1), precision (P), and recall (R) for the \"defect\" and \"possible defect\" classes.",
"cite_spans": [
{
"start": 965,
"end": 984,
"text": "(Lee et al., 2020a)",
"ref_id": "BIBREF22"
},
{
"start": 993,
"end": 1014,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF33"
},
{
"start": 1083,
"end": 1108,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task 5: Automatic Classification of Tweets Reporting a Birth Defect Pregnancy Outcome",
"sec_num": "3.5"
},
{
"text": "This paper presented an overview of the #SMM4H 2020 shared tasks. With 29 teams and a total of 130 system submissions, participation in the #SMM4H shared tasks continues to grow. The best-performing system for each task was deep learning-based, and most were transformer-based architectures. The system description papers that are cited in Appendix A were each peer-reviewed by two reviewers and provide further details about 26 teams' systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "1 https://codalab.org/ 2 https://toloka.yandex.ru/ 3 https://www.meddra.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work for #SMM4H 2020 at the University of Pennsylvania was supported by the National Institutes of Health (NIH) National Library of Medicine (NLM) [grant number R01LM011176]. The work at Kazan Federal University was supported by the Russian Science Foundation [grant number 18-11-00284]. The authors would also like to thank Alexis Upshur for her contribution to annotating tweets, Dmitry Ustalov and other members of the Yandex.Toloka team for providing credits for the crowdsourced annotation of Russian tweets, and all those who reviewed system description papers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Sentence contextual encoder with BERT and BiLSTM for automatic classification with imbalanced medication tweets",
"authors": [
{
"first": "Olanrewaju",
"middle": [
"Tahir"
],
"last": "Aduragba",
"suffix": ""
},
{
"first": "Jialin",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Gautham",
"middle": [],
"last": "Senthilnathan",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Cristea",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "165--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olanrewaju Tahir Aduragba, Jialin Yu, Gautham Senthilnathan, and Alexandra Cristea. 2020. Sentence contextual encoder with BERT and BiLSTM for automatic classification with imbalanced medication tweets. In Proceed- ings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 165-167.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Text classification models for the automatic detection of nonmedical prescription medication use from social media",
"authors": [
{
"first": "Mohammed",
"middle": [
"Ali"
],
"last": "Al-Garadi",
"suffix": ""
},
{
"first": "Yuan-Chi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Yucheng",
"middle": [],
"last": "Ruan",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
},
{
"first": "Jeanmarie",
"middle": [],
"last": "Perrone",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammed Ali Al-Garadi, Yuan-Chi Yang, Haitao Cai, Yucheng Ruan, Karen O'Connor, Graciela Gonzalez- Hernandez, Jeanmarie Perrone, and Abeed Sarker. 2020. Text classification models for the automatic detection of nonmedical prescription medication use from social media. medRxiv.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Identification of medication tweets using domainspecific pre-trained language models",
"authors": [
{
"first": "Yandrapati",
"middle": [],
"last": "Prakash Babu",
"suffix": ""
},
{
"first": "Rajagopal",
"middle": [],
"last": "Eswari",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "128--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yandrapati Prakash Babu and Rajagopal Eswari. 2020. Identification of medication tweets using domain- specific pre-trained language models. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 128-130.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "CLaC at SMM4H 2020: Birth defect mention detection",
"authors": [
{
"first": "Parsa",
"middle": [],
"last": "Bagherzadeh",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Bergler",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "168--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parsa Bagherzadeh and Sabine Bergler. 2020. CLaC at SMM4H 2020: Birth defect mention detection. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 168-170.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic detecting for health-related Twitter data with BioBERT",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "63--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Bai and Xiaobing Zhou. 2020. Automatic detecting for health-related Twitter data with BioBERT. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 63-69.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Transformer models for drug adverse effects detection from tweets",
"authors": [
{
"first": "Pavel",
"middle": [],
"last": "Blinov",
"suffix": ""
},
{
"first": "Manvel",
"middle": [],
"last": "Avetisian",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "110--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pavel Blinov and Manvel Avetisian. 2020. Transformer models for drug adverse effects detection from tweets. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 110-112.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Deeppavlov: Opensource library for dialogue systems",
"authors": [
{
"first": "Mikhail",
"middle": [],
"last": "Burtsev",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Seliverstov",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Airapetyan",
"suffix": ""
},
{
"first": "Mikhail",
"middle": [],
"last": "Arkhipov",
"suffix": ""
},
{
"first": "Dilyara",
"middle": [],
"last": "Baymurzina",
"suffix": ""
},
{
"first": "Nickolay",
"middle": [],
"last": "Bushkov",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Gureenkova",
"suffix": ""
},
{
"first": "Taras",
"middle": [],
"last": "Khakhulin",
"suffix": ""
},
{
"first": "Yurii",
"middle": [],
"last": "Kuratov",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Kuznetsov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL 2018, System Demonstrations",
"volume": "",
"issue": "",
"pages": "122--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikhail Burtsev, Alexander Seliverstov, Rafael Airapetyan, Mikhail Arkhipov, Dilyara Baymurzina, Nickolay Bushkov, Olga Gureenkova, Taras Khakhulin, Yurii Kuratov, Denis Kuznetsov, et al. 2018. Deeppavlov: Open- source library for dialogue systems. In Proceedings of ACL 2018, System Demonstrations, pages 122-127.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "FBK@SMM4H2020: RoBERTa for detecting medications on Twitter",
"authors": [
{
"first": "Silvia",
"middle": [],
"last": "Casola",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Lavelli",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "101--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Silvia Casola and Alberto Lavelli. 2020. FBK@SMM4H2020: RoBERTa for detecting medications on Twitter. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 101-103.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Detecting tweets mentioning drug name and adverse drug reaction with hierarchical tweet representation and multi-head selfattention",
"authors": [
{
"first": "Chuhan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fangzhao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Junxin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sixing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yongfeng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Xie",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "34--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu Chuhan, Wu Fangzhao, Liu Junxin, Wu Sixing, Huang Yongfeng, and Xie Xing. 2018. Detecting tweets mentioning drug name and adverse drug reaction with hierarchical tweet representation and multi-head self- attention. In Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop and Shared Task, pages 34-37.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Ensemble BERT for classifying medicationmentioning tweets",
"authors": [
{
"first": "Huong",
"middle": [
"N"
],
"last": "Dang",
"suffix": ""
},
{
"first": "Kahyun",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Henry",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Uzuner",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "37--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huong N. Dang, Kahyun Lee, Sam Henry, and\u00d6zlem Uzuner. 2020. Ensemble BERT for classifying medication- mentioning tweets. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Work- shop & Shared Task, pages 37-41.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Maximum likelihood estimation of observer error-rates using the EM algorithm",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Philip Dawid",
"suffix": ""
},
{
"first": "Allan",
"middle": [
"M"
],
"last": "Skene",
"suffix": ""
}
],
"year": 1979,
"venue": "Journal of the Royal Statistical Society: Series C (Applied Statistics)",
"volume": "28",
"issue": "1",
"pages": "20--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Philip Dawid and Allan M. Skene. 1979. Maximum likelihood estimation of observer error-rates using the EM algorithm. Journal of the Royal Statistical Society: Series C (Applied Statistics), 28(1):20-28.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 17th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidi- rectional transformers for language understanding. In Proceedings of the 17th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL- HLT), pages 4171-4186.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Approaching SMM4H 2020 with ensembles of BERT flavours",
"authors": [
{
"first": "George-Andrei",
"middle": [],
"last": "Dima",
"suffix": ""
},
{
"first": "Andrei-Marius",
"middle": [],
"last": "Avram",
"suffix": ""
},
{
"first": "Dumitru-Clementin",
"middle": [],
"last": "Cercel",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "153--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George-Andrei Dima, Andrei-Marius Avram, and Dumitru-Clementin Cercel. 2020. Approaching SMM4H 2020 with ensembles of BERT flavours. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 153-157.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "How far can we go with just out-of-the-box BERT models?",
"authors": [
{
"first": "Lucie",
"middle": [],
"last": "Gattepaille",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "95--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucie Gattepaille. 2020. How far can we go with just out-of-the-box BERT models? In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 95-100.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Sentence transformers and Bayesian optimization for adverse drug effect detection from Twitter",
"authors": [
{
"first": "Oguzhan",
"middle": [],
"last": "Gencoglu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "161--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oguzhan Gencoglu. 2020. Sentence transformers and Bayesian optimization for adverse drug effect detection from Twitter. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 161-164.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "BERT implementation for detecting adverse drug effects mentions in Russian",
"authors": [
{
"first": "Andrey",
"middle": [],
"last": "Gusev",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Kuznetsova",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Polyanskaya",
"suffix": ""
},
{
"first": "Egor",
"middle": [],
"last": "Yatsishin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "46--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrey Gusev, Anna Kuznetsova, Anna Polyanskaya, and Egor Yatsishin. 2020. BERT implementation for detecting adverse drug effects mentions in Russian. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 46-50.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Want to identify, extract and normalize adverse drug reactions in tweets? Use RoBERTa",
"authors": [
{
"first": "Katikapalli",
"middle": [],
"last": "Subramanyam Kalyan",
"suffix": ""
},
{
"first": "Sivanesan",
"middle": [],
"last": "Sangeetha",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "121--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katikapalli Subramanyam Kalyan and Sivanesan Sangeetha. 2020. Want to identify, extract and normalize ad- verse drug reactions in tweets? Use RoBERTa. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 121-124.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Cadec: A corpus of adverse drug effect annotations",
"authors": [
{
"first": "Sarvnaz",
"middle": [],
"last": "Karimi",
"suffix": ""
},
{
"first": "Alejanrdo",
"middle": [],
"last": "Metke-Himenez",
"suffix": ""
},
{
"first": "Madonna",
"middle": [],
"last": "Kemp",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Biomedical Informatics",
"volume": "55",
"issue": "",
"pages": "73--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarvnaz Karimi, Alejanrdo Metke-Himenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug effect annotations. Journal of Biomedical Informatics, 55:73-81.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Adverse drug reaction detection in Twitter using RoBERTa and rules",
"authors": [
{
"first": "Sedigheh",
"middle": [],
"last": "Khademi",
"suffix": ""
},
{
"first": "Pari",
"middle": [],
"last": "Delir Haghighi",
"suffix": ""
},
{
"first": "Frada",
"middle": [],
"last": "Burstein",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "113--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sedigheh Khademi, Pari Delir Haghighi, and Frada Burstein. 2020. Adverse drug reaction detection in Twitter using RoBERTa and rules. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 113-117.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Social media mining for birth defects research: A rule-based, bootstrapping approach to collecting data for rare healthrelated events on Twitter",
"authors": [
{
"first": "Ari",
"middle": [
"Z"
],
"last": "Klein",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Biomedical Informatics",
"volume": "87",
"issue": "",
"pages": "68--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Z. Klein, Abeed Sarker, Haitao Cai, Davy Weissenbacher, and Graciela Gonzalez-Hernandez. 2018. Social media mining for birth defects research: A rule-based, bootstrapping approach to collecting data for rare health- related events on Twitter. Journal of Biomedical Informatics, 87:68-78.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Towards scaling Twitter data for digital epidemiology of birth defects",
"authors": [
{
"first": "Ari",
"middle": [
"Z"
],
"last": "Klein",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
}
],
"year": 2019,
"venue": "npj Digital Medicine",
"volume": "2",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Z. Klein, Abeed Sarker, Davy Weissenbacher, and Graciela Gonzalez-Hernandez. 2019. Towards scaling Twitter data for digital epidemiology of birth defects. npj Digital Medicine, 2:1-9.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "ALBERT: A lite BERT for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 8th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In Proceedings of the 8th International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "BioBERT: A pretrained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sundong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sundong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020a. BioBERT: A pretrained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Medication mention detection in tweets using ELECTRA transformers and decision trees",
"authors": [
{
"first": "Lung-Hao",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Po-Han",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hao-Chuan",
"middle": [],
"last": "Kao",
"suffix": ""
},
{
"first": "Ting-Chun",
"middle": [],
"last": "Hung",
"suffix": ""
},
{
"first": "Po-Lei",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kuo-Kai",
"middle": [],
"last": "Shyu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "131--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lung-Hao Lee, Po-Han Chen, Hao-Chuan Kao, Ting-Chun Hung, Po-Lei Lee, and Kuo-Kai Shyu. 2020b. Medi- cation mention detection in tweets using ELECTRA transformers and decision trees. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 131-133.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "SpeechTrans@SMM4H'20: Impact of preprocessing and n-grams on automatic classification of tweets that mention medications",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Lichouri",
"suffix": ""
},
{
"first": "Mourad",
"middle": [],
"last": "Abbas",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "118--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Lichouri and Mourad Abbas. 2020. SpeechTrans@SMM4H'20: Impact of preprocessing and n-grams on automatic classification of tweets that mention medications. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 118-120.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "RoBERTa: A robustly optimized BERT pretraining approach. arXiv Preprint",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv Preprint, arXiv:1907.11692.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sentence classification with imbalanced data for health applications",
"authors": [
{
"first": "Liza",
"middle": [],
"last": "Farhana Ferdousi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "138--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Farhana Ferdousi Liza. 2020. Sentence classification with imbalanced data for health applications. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 138-145.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "NLP@VCU: Identifying adverse effects in English tweets for unbalanced data",
"authors": [
{
"first": "Darshini",
"middle": [],
"last": "Mahendran",
"suffix": ""
},
{
"first": "Cora",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Bridget",
"middle": [
"T"
],
"last": "Mcinnes",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "158--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Darshini Mahendran, Cora Lewis, and Bridget T. McInnes. 2020. NLP@VCU: Identifying adverse effects in English tweets for unbalanced data. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 158-160.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Automatic classification of tweets mentioning a medication using pre-trained sentence encoders",
"authors": [
{
"first": "Laiba",
"middle": [],
"last": "Mehnaz",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "150--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laiba Mehnaz. 2020. Automatic classification of tweets mentioning a medication using pre-trained sentence encoders. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 150-152.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "SMM4H Shared Task 2020 -A hybrid pipeline for identifying prescription drug abuse from Twitter: Machine learning, deep learning, and post-processing",
"authors": [
{
"first": "Isabel",
"middle": [],
"last": "Metzger",
"suffix": ""
},
{
"first": "Emir",
"middle": [
"Y"
],
"last": "Haskovic",
"suffix": ""
},
{
"first": "Allison",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "Whitley",
"middle": [
"M"
],
"last": "Yi",
"suffix": ""
},
{
"first": "Rajat",
"middle": [
"S"
],
"last": "Chandra",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"T"
],
"last": "Rutledge",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "McMahon",
"suffix": ""
},
{
"first": "Yindalon",
"middle": [],
"last": "Aphinyanaphongs",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "57--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isabel Metzger, Emir Y. Haskovic, Allison Black, Whitley M. Yi, Rajat S. Chandra1, Mark T. Rutledge, William McMahon, and Yindalon Aphinyanaphongs. 2020. SMM4H Shared Task 2020 -A hybrid pipeline for identify- ing prescription drug abuse from Twitter: Machine learning, deep learning, and post-processing. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 57-62.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "KFU NLP Team at SMM4H 2020 Tasks: Cross-lingual transfer learning with pretrained language models for drug reactions",
"authors": [
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinova",
"suffix": ""
},
{
"first": "Andrey",
"middle": [],
"last": "Sakhovskiy",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "51--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zulfat Miftahutdinova, Andrey Sakhovskiy, and Elena Tutubalina. 2020. KFU NLP Team at SMM4H 2020 Tasks: Cross-lingual transfer learning with pretrained language models for drug reactions. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 51-56.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Promoting reproducible research for characterizing nonmedical use of medications through data annotation: Description of a Twitter corpus and guidelines",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Jeanmarie",
"middle": [],
"last": "Perrone",
"suffix": ""
},
{
"first": "Graciela",
"middle": [
"Gonzalez"
],
"last": "Hernandez",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Medical Internet Research",
"volume": "22",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen O'Connor, Abeed Sarker, Jeanmarie Perrone, and Graciela Gonzalez Hernandez. 2020. Promoting repro- ducible research for characterizing nonmedical use of medications through data annotation: Description of a Twitter corpus and guidelines. Journal of Medical Internet Research, 22(2):e15861.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, and Matt Gardner. 2018. Deep contextualized word represen- tations. In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2227-2237.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Detecting tweets reporting birth defect pregnancy outcome using two-view CNN RNN based architecture",
"authors": [
{
"first": "",
"middle": [],
"last": "Saichethan Miriyala Reddy",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "125--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saichethan Miriyala Reddy. 2020. Detecting tweets reporting birth defect pregnancy outcome using two-view CNN RNN based architecture. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 125-127.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Autobots ensemble: Identifying and extracting adverse drug reaction from tweets using transformer based pipelines",
"authors": [
{
"first": "Sougata",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Souvik",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Prashi",
"middle": [],
"last": "Khurana",
"suffix": ""
},
{
"first": "Rohini",
"middle": [
"K"
],
"last": "Srihari",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "104--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sougata Saha, Souvik Das, Prashi Khurana, and Rohini K. Srihari. 2020. Autobots ensemble: Identifying and extracting adverse drug reaction from tweets using transformer based pipelines. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 104-109.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "LITL at SMM4H: An old-school feature-based classifier for identifying adverse effects in tweets",
"authors": [
{
"first": "Ludovic",
"middle": [],
"last": "Tanguy",
"suffix": ""
},
{
"first": "Lydia-Mai",
"middle": [],
"last": "Ho-Dac",
"suffix": ""
},
{
"first": "C\u00e9cile",
"middle": [],
"last": "Fabre",
"suffix": ""
},
{
"first": "Roxane",
"middle": [],
"last": "Bois",
"suffix": ""
},
{
"first": "Touati Mohamed Yacine",
"middle": [],
"last": "Haddad",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Ibarboure",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Joyau",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Le Moal",
"suffix": ""
},
{
"first": "Jade",
"middle": [],
"last": "Moillic",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Roudaut",
"suffix": ""
},
{
"first": "Mathilde",
"middle": [],
"last": "Simounet",
"suffix": ""
},
{
"first": "Irena",
"middle": [],
"last": "Stankovic",
"suffix": ""
},
{
"first": "Mickaela",
"middle": [],
"last": "Vandewaetere",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "134--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ludovic Tanguy, Lydia-Mai Ho-Dac, C\u00e9cile Fabre, Roxane Bois, Touati Mohamed Yacine Haddad, Claire Ibar- boure, Marie Joyau, Fran\u00e7ois Le moal, Jade Moillic, Laura Roudaut, Mathilde Simounet, Irena Stankovic, and Mickaela Vandewaetere. 2020. LITL at SMM4H: An old-school feature-based classifier for identifying ad- verse effects in tweets. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 134-137.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "The Russian Drug Reaction Corpus and neural models for drug reactions and effectiveness detection in user reviews",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
},
{
"first": "Ilseyar",
"middle": [],
"last": "Alimova",
"suffix": ""
},
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinov",
"suffix": ""
},
{
"first": "Andrey",
"middle": [],
"last": "Sakhovskiy",
"suffix": ""
},
{
"first": "Valentin",
"middle": [],
"last": "Malykh",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Nikolenko",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Tutubalina, Ilseyar Alimova, Zulfat Miftahutdinov, Andrey Sakhovskiy, Valentin Malykh, and Sergey Nikolenko. 2020. The Russian Drug Reaction Corpus and neural models for drug reactions and effectiveness detection in user reviews. Bioinformatics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Drugs@fda glossary of terms",
"authors": [],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "U.S. Food and Drug Administration. 2017. Drugs@fda glossary of terms. https://www.fda.gov/drugs/ drug-approvals-and-databases/drugsfda-glossary-terms. [Drug; Drug Product; online, accessed 21-July-2020].",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Identifying medication abuse and adverse effects from tweets: University of Michigan at #SMM4H 2020",
"authors": [
{
"first": "V",
"middle": [
"G"
],
"last": "Vinod Vydiswaran",
"suffix": ""
},
{
"first": "Deahan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xinyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ermioni",
"middle": [],
"last": "Carr",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Martindale",
"suffix": ""
},
{
"first": "Jingcheng",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Noha",
"middle": [],
"last": "Ghannam",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Althoen",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Castellanos",
"suffix": ""
},
{
"first": "Neel",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Vasquez",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "90--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V.G.Vinod Vydiswaran, Deahan Yu, Xinyan Zhao, Ermioni Carr, Jonathan Martindale, Jingcheng Xiao, Noha Ghannam, Matteo Althoen, Alexis Castellanos, Neel Patel, and Daniel Vasquez. 2020. Identifying medication abuse and adverse effects from tweets: University of Michigan at #SMM4H 2020. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 90-94.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "ISLab system for SMM4H Shared Task 2020",
"authors": [
{
"first": "Chen-Kai",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "You-Chen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Bo-Chun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Bo-Hong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "You-Ning",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Po-Hao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hong-Jie",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Chung-Hong",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "42--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen-Kai Wang, You-Chen Zhang, Bo-Chun Xu, Bo-Hong Wang, You-Ning Xu, Po-Hao Chen, Hong-Jie Dai, and Chung-Hong Lee. 2020. ISLab system for SMM4H Shared Task 2020. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 42-45.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Overview of the third social media mining for health (SMM4H) shared tasks at EMNLP",
"authors": [
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Paul",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "13--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davy Weissenbacher, Abeed Sarker, Michael J. Paul, and Graciela Gonzalez-Hernandez. 2018. Overview of the third social media mining for health (SMM4H) shared tasks at EMNLP 2018. In Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop & Shared Task, pages 13-16.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Deep neural networks for ensemble for detecting medication mentions in tweets",
"authors": [
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Magge",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of the American Medical Informatics Association",
"volume": "26",
"issue": "12",
"pages": "1618--1626",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davy Weissenbacher, Abeed Sarker, Ari Klein, Karen O'Connor, Arjun Magge, and Graciela Gonzalez-Hernandez. 2019. Deep neural networks for ensemble for detecting medication mentions in tweets. Journal of the American Medical Informatics Association, 26(12):1618-1626.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "HITSZ-ICRC: A report for SMM4H shared task 2020-Automatic classification of medications and adverse effect in tweets",
"authors": [
{
"first": "Xiaoyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Buzhou",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "146--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoyu Zhao, Ying Xiong, and Buzhou Tang. 2020. HITSZ-ICRC: A report for SMM4H shared task 2020- Automatic classification of medications and adverse effect in tweets . In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 146-149.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A systematic approach for developing a corpus of patient reported adverse drug events: A case study for SSRI and SNRI medications",
"authors": [
{
"first": "Maryam",
"middle": [],
"last": "Zolnoori",
"suffix": ""
},
{
"first": "Kin",
"middle": [
"Wah"
],
"last": "Fung",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"B"
],
"last": "Patrick",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Fontelo",
"suffix": ""
},
{
"first": "Hadi",
"middle": [],
"last": "Kharrazi",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Faiola",
"suffix": ""
},
{
"first": "Yi Shuan Shirley",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Christina",
"middle": [
"E"
],
"last": "Eldredge",
"suffix": ""
},
{
"first": "Jake",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Conway",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Biomedical informatics",
"volume": "90",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maryam Zolnoori, Kin Wah Fung, Timothy B. Patrick, Paul Fontelo, Hadi Kharrazi, Anthony Faiola, Yi Shuan Shirley Wu, Christina E. Eldredge, Jake Luo, Mike Conway, et al. 2019. A systematic approach for developing a corpus of patient reported adverse drug events: A case study for SSRI and SNRI medications. Journal of Biomedical informatics, 90:103091.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"content": "<table><tr><td/><td colspan=\"4\">: Task 1 system summaries and F 1 -scores (F 1 ), precision (P), and recall (R) for the \"positive\"</td></tr><tr><td colspan=\"4\">class (i.e., tweets mentioning medications).</td><td/></tr><tr><td>Team</td><td>F 1</td><td>P</td><td>R</td><td>System Summary</td></tr><tr><td>21</td><td colspan=\"3\">0.64 0.62 0.65 RoBERTa</td><td/></tr><tr><td>25</td><td colspan=\"3\">0.58 0.63 0.54 EnDR-BERT, ensemble</td><td/></tr><tr><td>10</td><td colspan=\"4\">0.58 0.52 0.65 RoBERTa, SMM4H'17 and SMM4H'19 corpora</td></tr><tr><td>5</td><td colspan=\"3\">0.57 0.50 0.66 RoBERTa</td><td/></tr><tr><td>4</td><td colspan=\"3\">0.56 0.50 0.63 RoBERTa</td><td/></tr><tr><td>7</td><td colspan=\"4\">0.56 0.56 0.55 RoBERTa, sub-corpus ensemble, rules</td></tr><tr><td>17</td><td colspan=\"4\">0.55 0.47 0.65 BERT, DrugBank, MedlinePlus, TransE MeSH representations</td></tr><tr><td>6</td><td colspan=\"4\">0.54 0.49 0.60 BioBERT, data augmentation, ensemble</td></tr><tr><td>1</td><td colspan=\"3\">0.51 0.48 0.54 CLAPA, BERT</td><td/></tr><tr><td>22</td><td colspan=\"4\">0.48 0.44 0.53 SBERT RoBERTa sentence embeddings, class weights</td></tr><tr><td>2</td><td colspan=\"3\">0.47 0.58 0.40 BERT, SMM4H'20 Task 3 corpus</td><td/></tr><tr><td>19</td><td colspan=\"3\">0.37 0.26 0.60 BioBERT pre-trained on tweets</td><td/></tr><tr><td>20</td><td colspan=\"4\">0.35 0.28 0.46 CNN, GloVe word embeddings pre-trained on tweets, under-sampling</td></tr><tr><td>16</td><td colspan=\"4\">0.32 0.19 0.87 SVM, sent2vec sentence and bi-gram embeddings pre-trained on tweets, under-sampling</td></tr><tr><td>28</td><td colspan=\"3\">0.31 0.23 0.51 NA</td><td/></tr><tr><td>15</td><td colspan=\"4\">0.31 0.31 0.31 logistic regression, feature engineering</td></tr><tr><td>29</td><td colspan=\"3\">0.27 0.16 0.79 NA</td><td/></tr></table>",
"html": null,
"text": "",
"type_str": "table"
},
"TABREF2": {
"num": null,
"content": "<table><tr><td>Team</td><td>F 1</td><td>P</td><td>R</td><td>System Summary</td></tr><tr><td>22</td><td colspan=\"4\">0.17 0.15 0.20 SBERT DistilBERT sentence embeddings, class weights</td></tr><tr><td>15</td><td colspan=\"4\">0.15 0.33 0.10 logistic regression, feature engineering</td></tr><tr><td>16</td><td>0.07 0</td><td/><td/><td/></tr></table>",
"html": null,
"text": ".04 0.60 tree-based ensemble, LASER sentence embeddings, under-sampling 4",
"type_str": "table"
},
"TABREF4": {
"num": null,
"content": "<table/>",
"html": null,
"text": "",
"type_str": "table"
},
"TABREF6": {
"num": null,
"content": "<table/>",
"html": null,
"text": "",
"type_str": "table"
},
"TABREF7": {
"num": null,
"content": "<table><tr><td>Team</td><td>F 1</td><td>P</td><td>R</td><td>System Summary</td></tr><tr><td>13</td><td>0.51 0</td><td/><td/><td/></tr></table>",
"html": null,
"text": ".53 0.50 CNN, fastText word embeddings, data augmentation 1 0.49 0.46 0.51 SVM, under-sampling 16 0.46 0.35 0.67 SVM, sent2vec sentence and bi-gram embeddings pre-trained on tweets, under-sampling",
"type_str": "table"
},
"TABREF8": {
"num": null,
"content": "<table><tr><td>Team</td><td>F 1</td><td>P</td><td>R</td><td>System Summary</td></tr><tr><td>6</td><td>0.69</td><td/><td/><td/></tr></table>",
"html": null,
"text": "0.65 0.73 BioBERT, data augmentation, ensemble 24 0.69 0.71 0.67 ELMo, GCNN, ANNIE NER, medical and family relations lexicons 19 0.62 0.56 0.69 BioBERT pre-trained on tweets 11 0.58 0.54 0.64 GloVe word and hashtag embeddings pre-trained on tweets, CNN, BiGRU",
"type_str": "table"
},
"TABREF9": {
"num": null,
"content": "<table/>",
"html": null,
"text": "",
"type_str": "table"
}
}
}
}