| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:35:21.624817Z" |
| }, |
| "title": "Conversation-Aware Filtering of Online Patient Forum Messages", |
| "authors": [ |
| { |
| "first": "Anne", |
| "middle": [], |
| "last": "Dirkson", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Leiden University Niels", |
| "location": { |
| "addrLine": "Bohrweg 1", |
| "postCode": "2333 CA", |
| "settlement": "Leiden", |
| "country": "the Netherlands" |
| } |
| }, |
| "email": "a.r.dirkson@liacs.leidenuniv.nl" |
| }, |
| { |
| "first": "Suzan", |
| "middle": [], |
| "last": "Verberne", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Leiden University Niels", |
| "location": { |
| "addrLine": "Bohrweg 1", |
| "postCode": "2333 CA", |
| "settlement": "Leiden", |
| "country": "the Netherlands" |
| } |
| }, |
| "email": "s.verberne@liacs.leidenuniv.nl" |
| }, |
| { |
| "first": "Wessel", |
| "middle": [], |
| "last": "Kraaij", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Leiden University Niels", |
| "location": { |
| "addrLine": "Bohrweg 1", |
| "postCode": "2333 CA", |
| "settlement": "Leiden", |
| "country": "the Netherlands" |
| } |
| }, |
| "email": "w.kraaij@liacs.leidenuniv.nl" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Previous approaches to NLP tasks on online patient forums have been limited to single posts as units, thereby neglecting the overarching conversational structure. In this paper we explore the benefit of exploiting conversational context for filtering posts relevant to a specific medical topic. We experiment with two approaches to add conversational context to a BERT model: a sequential CRF layer and manually engineered features. Although neither approach can outperform the F 1 score of the BERT baseline, we find that adding a sequential layer improves precision for all target classes whereas adding a non-sequential layer with manually engineered features leads to a higher recall for two out of three target classes. Thus, depending on the end goal, conversation-aware modelling may be beneficial for identifying relevant messages. We hope our findings encourage other researchers in this domain to move beyond studying messages in isolation towards more discourse-based data collection and classification. We release our code for the purpose of follow-up research. 1", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Previous approaches to NLP tasks on online patient forums have been limited to single posts as units, thereby neglecting the overarching conversational structure. In this paper we explore the benefit of exploiting conversational context for filtering posts relevant to a specific medical topic. We experiment with two approaches to add conversational context to a BERT model: a sequential CRF layer and manually engineered features. Although neither approach can outperform the F 1 score of the BERT baseline, we find that adding a sequential layer improves precision for all target classes whereas adding a non-sequential layer with manually engineered features leads to a higher recall for two out of three target classes. Thus, depending on the end goal, conversation-aware modelling may be beneficial for identifying relevant messages. We hope our findings encourage other researchers in this domain to move beyond studying messages in isolation towards more discourse-based data collection and classification. We release our code for the purpose of follow-up research. 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In the past decade, social media has emerged as a source of valuable knowledge in the health domain (Gonzalez-Hernandez et al., 2017) , for instance during the COVID-19 pandemic (Sarker et al., 2020) (Klein et al., 2020) . In order to use social media to answer a medical question, it is necessary to identify posts on the forum that are relevant to the question at hand e.g. posts mentioning adverse drug responses (ADRs) (Li et al., 2020) , personal experiences (Dirkson et al., 2019) , medication abuse (Sarker et al., 2016) or medical misinformation (Kinsora et al., 2017) . This filtering step is often the first step of the analysis pipeline. In this paper, we will refer to this specific type of filtering as relevance classification.", |
| "cite_spans": [ |
| { |
| "start": 100, |
| "end": 133, |
| "text": "(Gonzalez-Hernandez et al., 2017)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 178, |
| "end": 199, |
| "text": "(Sarker et al., 2020)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 200, |
| "end": 220, |
| "text": "(Klein et al., 2020)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 423, |
| "end": 440, |
| "text": "(Li et al., 2020)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 464, |
| "end": 486, |
| "text": "(Dirkson et al., 2019)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 506, |
| "end": 527, |
| "text": "(Sarker et al., 2016)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 554, |
| "end": 576, |
| "text": "(Kinsora et al., 2017)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Previous automatic methods for medical relevance classification generally consider posts as units without context, thereby ignoring any information that can be gained from conversational context. One example of such an approach is the recent shared task on ADR relevance classification (Weissenbacher et al., 2019 ). Yet, including the conversational context may prove beneficial to relevance classification, as responses in a thread often relate to previous responses. For example, responses to a question or comment about a specific side effect are likely to also concern this side effect. To test this hypothesis, we investigate how positive labels are distributed across and within conversational threads.", |
| "cite_spans": [ |
| { |
| "start": 286, |
| "end": 313, |
| "text": "(Weissenbacher et al., 2019", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "At present, only one study into medical relevance classification has included some engineered features to capture aspects of the conversational structure (Kinsora et al., 2017) . However, as this study includes only two discourse-based features, the effect of including manually engineered features that capture conversational structure is still largely unknown for relevance classification tasks.", |
| "cite_spans": [ |
| { |
| "start": 154, |
| "end": 176, |
| "text": "(Kinsora et al., 2017)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Furthermore, including the relation between posts on a discourse level may also be able to improve classifier performance. Each post serves a conversational function in a dialogue, e.g. a question, explanation or statement (Austin, 1962) . These functions are called dialogue acts (Stolcke et al., 2000) . We have not found any study that included dialogue acts as features for medical relevance classification.", |
| "cite_spans": [ |
| { |
| "start": 223, |
| "end": 237, |
| "text": "(Austin, 1962)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 281, |
| "end": 303, |
| "text": "(Stolcke et al., 2000)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As an alternative to using manually engineered features, conversational threads can also be modelled with a sequential model. This has proven beneficial in other fields such as rumor classification in social media discussions (Zubiaga et al., 2018) . As of yet, the use of sequential models for medical relevance classification has also not been explored.", |
| "cite_spans": [ |
| { |
| "start": 226, |
| "end": 248, |
| "text": "(Zubiaga et al., 2018)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We address the following research questions in this paper:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "RQ1 To what extent can the addition of a sequential model on top of state-of-the-art non-sequential models improve medical relevance classification of social media data?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "RQ2 To what extent can the addition of manually engineered features for conversational structure and discourse improve medical relevance classification?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We use two different datasets for answering our questions. In our current research, we are particularly interested in discovering ADRs in online discussions. We have collected and annotated a dataset about this topic. Since this dataset is new, no other results have been published for it. We therefore use one other dataset for evaluating our methods: the medical misinformation dataset by Kinsora et al. (2017) . We use a BERT-based model as baseline. BERT models constitute the current state of the art for most NLP tasks (Devlin et al., 2019) including ADR relevance classification (Weissenbacher et al., 2019) .", |
| "cite_spans": [ |
| { |
| "start": 391, |
| "end": 412, |
| "text": "Kinsora et al. (2017)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 525, |
| "end": 546, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 586, |
| "end": 614, |
| "text": "(Weissenbacher et al., 2019)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the following section, we will elaborate on related work. Hereafter, we describe our methodology and data in Section 3 and 4 respectively. Finally, we present and discuss our results in Section 5 and 6.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The use of conversational structure for improving the performance of classifiers of social media posts is prevalent in the field of rumor classification (Zubiaga et al., 2018) and related fields like disagreement detection (Rosenthal and McKeown, 2015) . Conversational structure has previously been exploited through (a) manually engineered features or (b) sequential classifiers.", |
| "cite_spans": [ |
| { |
| "start": 153, |
| "end": 175, |
| "text": "(Zubiaga et al., 2018)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 223, |
| "end": 252, |
| "text": "(Rosenthal and McKeown, 2015)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The most commonly employed engineered features to model the conversational structure are the similarity to the previous message and to the thread in general (Zubiaga et al., 2018) . In addition to these features, the current state-of-the-art model on a leading shared task for rumor stance classification (RumourEval-2019) uses the label of the previous message and the distance to the start of the thread (Li et al., 2019) . In the health domain, the only study that employs manually engineered features for conversational structure is Kinsora et al. (2017) . Specifically, they use the running count of positive labels and the distance to the previous positive label. In this study, we will employ the above features as well as expand upon them with additional discourse-related features.", |
| "cite_spans": [ |
| { |
| "start": 157, |
| "end": 179, |
| "text": "(Zubiaga et al., 2018)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 406, |
| "end": 423, |
| "text": "(Li et al., 2019)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 537, |
| "end": 558, |
| "text": "Kinsora et al. (2017)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Other studies have used sequential classifiers to model the discursive nature of social media, although according to Zubiaga et al. (2018) this is \"still in its infancy\" (p. 276). Their comparison of various classifiers for rumor stance classification revealed that sequential classifiers outperform non-sequential classifiers overall. This is probably due to their ability to leverage information about sequential structure and preceding labels. Furthermore, Zubiaga et al. (2018) found that sequential classifiers did not benefit from contextual features representing thread context (e.g. similarity to the source tweet) whereas nonsequential classifiers did. They speculate that sequential classifiers take the surrounding context into account implicitly. To see if this also holds true for relevance classification in medical social media, we will compare the addition of conversation-aware features to both sequential and non-sequential models.", |
| "cite_spans": [ |
| { |
| "start": 117, |
| "end": 138, |
| "text": "Zubiaga et al. (2018)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 460, |
| "end": 481, |
| "text": "Zubiaga et al. (2018)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "CRF As a sequential model we use Conditional Random Fields (CRF). We train the models using the implementation in sklearn-crfsuite. L1 and L2 regularization parameters were tuned for each fold.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Linear SVM As a non-sequential counterpart, we use the sklearn implementation of Linear Support Vector Machines. The hyper-parameter C is tuned per fold with a grid of 10 -3 to 10 3 in steps of \u00d710. DistilBERT As BERT model, we opt for DistilBERT (distilbert-base-uncased), which is a lighter, more computationally efficient variant of BERT (Sanh et al., 2019) . We use the Huggingface implementation (Wolf et al., 2019) with the wrapper ktrain (Maiya, 2020) to train our models. The initialization seed is set to 1. We use the default learning rate of 5 \u00d7 10 \u22125 and tune the number of epochs (3 or 4) per fold.", |
| "cite_spans": [ |
| { |
| "start": 341, |
| "end": 360, |
| "text": "(Sanh et al., 2019)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 401, |
| "end": 420, |
| "text": "(Wolf et al., 2019)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Ensemble models To investigate the benefit of adding a sequential model on top of the DistilBERT model, we experiment with a blending-based ensemble method: we input the raw confidence scores from DistilBERT for each label as features in a CRF model (i.e. CRF + BERTpred). We create an equivalent non-sequential baseline by using the same approach with an SVM (i.e. SVM + BERTpred).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To explore the benefit of manually engineered features that capture thread context, we use step-wise greedy forward feature selection using the features in Table 1 . For each step-wise iteration, we select the best feature to add to the model until the F 1 score no longer improves. We use 10-fold cross-validation in which per fold features are selected on the development data (10%) and tested on a held-out test set (10%). For a fair comparison, we keep folds and hyper-parameters the same as for the respective base model. Since the label distribution features could leak information, we omit these gold annotated features for evaluation. Instead, we perform an initial run without these features and use the resulting predictions to calculate them for the final evaluation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 156, |
| "end": 163, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Feature analysis", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We used 10-fold cross validation in all experiments. Instead of splitting per message, we split on whole discussion threads to ensure possible dependencies between posts do not bias the outcome. Statistical comparisons of model performance are done using Wilcoxon signed rank tests across the 10 folds. To avoid the multiple testing problem, we only compare the three best models -namely those with the highest F 1 score, precision and recall -to the BERT baseline.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model comparison", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Data collection At present, there is only one publicly available medical relevance classification data set that includes the conversational structure: the Medical Misinformation Data set (Kinsora et al., 2017) . It is based on MedHelp data and annotated for the presence of misinformation. We collected a second data set from a Facebook group of Gastro Intestinal Stromal Tumor (GIST) patients. We selected 527 discussions based on their likelihood to contain an ADR: We selected the threads that contained (1) at least one drug name according to a match with RxNorm (U.S. National Library of Medicine, 2020) and", |
| "cite_spans": [ |
| { |
| "start": 187, |
| "end": 209, |
| "text": "(Kinsora et al., 2017)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(2) a high percentage of posts in which authors shared experiences. The latter criterion was included since sharing that you had an ADR is an example of experience sharing. To estimate this, we used a previously developed classifier (Dirkson et al., 2019) . According to our classifier, at least 80% of the posts within each selected thread is a personal experience. Due to privacy issues and ownership of the data by the GIST International patient organization, we are not able to share this data set at present. See Table 2 for more details on the data sets.", |
| "cite_spans": [ |
| { |
| "start": 233, |
| "end": 255, |
| "text": "(Dirkson et al., 2019)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 518, |
| "end": 525, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Data annotation Following a pilot annotation round, the data was annotated by the first author and three patients for the presence of ADRs and coping strategies for dealing with ADRs (hereafter also called: Strategies) using an annotation guideline. 3 The pair-wise inter-annotator agreement was substantial for ADR (mean \u03ba =0.71) and moderate for Coping Strategies (mean \u03ba =0.54).", |
| "cite_spans": [ |
| { |
| "start": 250, |
| "end": 251, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4" |
| }, |
| { |
| "text": "As visualized in Figure 1a , the target class is not distributed equally across the discussion threads for any of the data sets; There appear to be many threads with few or no target posts. According to z-tests, the distribution is significantly different from normal. An inspection of the relative position of target posts within discussion threads reveals that target posts also cluster together (see Figure 1b) . The probability that the post after a target post is also a target post is 27% for Misinformation and 40% and 34% for ADRs and Coping Strategies respectively. These probabilities are higher than is to be expected based on the percentage of positively labelled posts (see Table 2 ). Thus, it appears that the conversational structure is indeed related to the probability of a post being relevant and consequently incorporating conversational structure or discourse may be able to improve performance of relevance classifiers. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 17, |
| "end": 26, |
| "text": "Figure 1a", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 403, |
| "end": 413, |
| "text": "Figure 1b)", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 687, |
| "end": 694, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Distribution of the target class in the discussion threads", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The results of model evaluation are presented in Table 3 . It appears that neither the addition of a sequential layer nor manual features can improve upon the F 1 score of the BERT model. Misinformation detection appears to be the exception to this; any additional layer, sequential or not, outperforms the BERT baseline model. The highest F 1 is attained by an SVM model based on USE sentence vectors (+Emb), which were specifically designed for representing whole sentences. Perhaps sentence vectors perform better than BERT embeddings when the BERT model performs poorly (F 1 = 0.366). Additional research will be necessary to substantiate this. Despite a lack of improvement in the F 1 score for the detection of ADR and Strategies, an additional layer does seem to offer flexibility in tailoring the model towards a higher recall or precision. On the one hand, recall can be improved for two target classes by adding a non-sequential SVM layer with manual features to the BERT model. On the other hand, precision can be improved through the addition of a sequential CRF layer on top of BERT predictions for all target classes. Adding manually engineered features in addition to the sequential layer only improves the precision further for the detection of coping strategies. Our findings are thereby in line with Zubiaga et al. (2018) . They speculated that sequential classifiers may take the surrounding context into account implicitly and therefore do not benefit from features representing thread context.", |
| "cite_spans": [ |
| { |
| "start": 1318, |
| "end": 1339, |
| "text": "Zubiaga et al. (2018)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 49, |
| "end": 56, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model comparison", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The only significant increase according to Wilcoxon signed rank tests is in the precision for ADR detection. This may be related to the high variance between folds. Further research is necessary to validate these results and advance our understanding of how conversation-aware modelling can be best be used for relevance classification. We believe that this first study shows that this is a promising direction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model comparison", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "There is large variation in which features are selected per fold. Manual inspection of the selected features shows that features relating to the distribution of labels in the thread are chosen most often, especially the running count of negative and positive labels in the thread (CountNeg, CountPos), and the label of Table 1 ). Features of this type may therefore be the most promising for future work. The number of features that is chosen is more consistent; On average, 1 or 2 of the 11 features are chosen.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 319, |
| "end": 326, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis of selected features", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "To further explore why certain features are chosen, we compute the correlations between the target label and the manually engineered features and between the BERT predictions and the manually engineered features (see Figure 2 ). We find, firstly, that features relating to the label distribution indeed appear to correlate most strongly with the ground truth labels. Secondly, the correlation between these features and the BERT predictions is often equal to or stronger than the respective correlation to the ground truth. This might indicate that this variance is already captured by the BERT model and therefore manually engineered features have little to add to the baseline model.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 217, |
| "end": 225, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis of selected features", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We find that the distribution of target posts across discussion threads is skewed and that within a conversational thread posts cluster together. Thus, our hypothesis that the probability of a target post occurring is related to the conversational structure appears valid.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In answer to RQ1, we find that a sequential CRF layer on top of a BERT model improves precision slightly, although only significantly so for ADR detection. In answer to RQ2, we find that the addition of manually engineered features representing thread context often does not aid performance. The one consistent exception is when combined with a non-sequential SVM layer on top of a BERT model. This combination can improve recall for all target classes, although not significantly. An additional layer on top of a BERT model that is able to capture the thread context appears to offer flexibility in tailoring the model towards a higher recall or precision. In future work, we plan to investigate the benefit of including conversational context for other tasks such as concept normalization of ADR.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "For all the data sets included in this study, a pre-selection of discussion threads was made prior to annotation to ensure a higher proportion of target posts. We expect that both sequential models and manually engineered features of thread context may prove more beneficial when such a pre-selection does not take place and the target class is even more imbalanced. Thus, our results may be an underestimation of the benefit of conversational context for finding 'needles in the haystack'.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Finally, our findings call into question the practice of splitting data into folds without taking the discussion context into account. In this study, we split the folds per discussion thread and we recommend others to consider doing so when dealing with multiple posts from the same thread, as neglecting to do so when there are dependencies between posts may bias model performance. This is especially important when threads contain duplicate posts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Our code is available at: https://github.com/AnneDirkson/ConversationAwareFiltering This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We opt for USE instead of BERT embeddings, as cosine similarity cannot be applied directly to BERT embeddings", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Available at: https://github.com/AnneDirkson/ConversationAwareFiltering", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was funded by the SIDN fonds. We would like to thank our annotators for their time and effort and our anonymous reviewers for their feedback.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "How to do things with words", |
| "authors": [ |
| { |
| "first": "Austin", |
| "middle": [], |
| "last": "John Langshaw", |
| "suffix": "" |
| } |
| ], |
| "year": 1962, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Langshaw Austin. 1962. How to do things with words. Oxford university press.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Universal sentence encoder for English", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Cer", |
| "suffix": "" |
| }, |
| { |
| "first": "Yinfei", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Sheng-Yi", |
| "middle": [], |
| "last": "Kong", |
| "suffix": "" |
| }, |
| { |
| "first": "Nan", |
| "middle": [], |
| "last": "Hua", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicole", |
| "middle": [], |
| "last": "Limtiaco", |
| "suffix": "" |
| }, |
| { |
| "first": "Rhomni", |
| "middle": [], |
| "last": "St. John", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [], |
| "last": "Constant", |
| "suffix": "" |
| }, |
| { |
| "first": "Mario", |
| "middle": [], |
| "last": "Guajardo-Cespedes", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Yuan", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Tar", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Strope", |
| "suffix": "" |
| }, |
| { |
| "first": "Ray", |
| "middle": [], |
| "last": "Kurzweil", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "169--174", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169-174, Brussels, Belgium, November. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "NAACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Narrative Detection in Online Patient Communities", |
| "authors": [ |
| { |
| "first": "Anne", |
| "middle": [], |
| "last": "Dirkson", |
| "suffix": "" |
| }, |
| { |
| "first": "Suzan", |
| "middle": [], |
| "last": "Verberne", |
| "suffix": "" |
| }, |
| { |
| "first": "Wessel", |
| "middle": [], |
| "last": "Kraaij", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the Text2StoryIR'19 Workshop. CEUR-WS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anne Dirkson, Suzan Verberne, and Wessel Kraaij. 2019. Narrative Detection in Online Patient Communities. In A. Jorge, R. Campos, A. Jatowt, and S. Bhatia, editors, Proceedings of the Text2StoryIR'19 Workshop. CEUR- WS.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Capturing the Patient's Perspective: A Review of Advances in Natural Language Processing of Health-Related Text", |
| "authors": [ |
| { |
| "first": "Graciela", |
| "middle": [], |
| "last": "Gonzalez-Hernandez", |
| "suffix": "" |
| }, |
| { |
| "first": "Abeed", |
| "middle": [], |
| "last": "Sarker", |
| "suffix": "" |
| }, |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "O'Connor", |
| "suffix": "" |
| }, |
| { |
| "first": "Guergana", |
| "middle": [], |
| "last": "Savova", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Yearbook of Medical Informatics", |
| "volume": "", |
| "issue": "", |
| "pages": "214--217", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Graciela Gonzalez-Hernandez, Abeed Sarker, Karen O'Connor, and Guergana Savova. 2017. Capturing the Patient's Perspective: A Review of Advances in Natural Language Processing of Health-Related Text. Yearbook of Medical Informatics, pages 214-217.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Creating a Labeled Dataset for Medical Misinformation in Health Forums", |
| "authors": [ |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Kinsora", |
| "suffix": "" |
| }, |
| { |
| "first": "Kate", |
| "middle": [], |
| "last": "Barron", |
| "suffix": "" |
| }, |
| { |
| "first": "Qiaozhu", |
| "middle": [], |
| "last": "Mei", |
| "suffix": "" |
| }, |
| { |
| "first": "Vinod", |
| "middle": [], |
| "last": "Vydiswaran", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "IEEE International Conference on Healthcare Informatics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexander Kinsora, Kate Barron, Qiaozhu Mei, and Vinod Vydiswaran. 2017. Creating a Labeled Dataset for Medical Misinformation in Health Forums. In IEEE International Conference on Healthcare Informatics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A chronological and geographical analysis of personal reports of COVID-19 on Twitter", |
| "authors": [ |
| { |
| "first": "Ari", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Arjun", |
| "middle": [], |
| "last": "Magge", |
| "suffix": "" |
| }, |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "O'Connor", |
| "suffix": "" |
| }, |
| { |
| "first": "Haitao", |
| "middle": [], |
| "last": "Cai", |
| "suffix": "" |
| }, |
| { |
| "first": "Davy", |
| "middle": [], |
| "last": "Weissenbacher", |
| "suffix": "" |
| }, |
| { |
| "first": "Graciela", |
| "middle": [], |
| "last": "Gonzalez-Hernandez", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "medRxiv (preprint)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ari Klein, Arjun Magge, Karen O'Connor, Haitao Cai, Davy Weissenbacher, and Graciela Gonzalez-Hernandez. 2020. A chronological and geographical analysis of personal reports of COVID-19 on Twitter. medRxiv (preprint).", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "eventAI at SemEval-2019 Task 7: Rumor Detection on Social Media by Exploiting Content, User Credibility and Propagation Information", |
| "authors": [ |
| { |
| "first": "Quanzhi", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Qiong", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Luo", |
| "middle": [], |
| "last": "Si", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 13th International Workshop on Semantic Evaluation (SemEval-2019)", |
| "volume": "", |
| "issue": "", |
| "pages": "855--859", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Quanzhi Li, Qiong Zhang, and Luo Si. 2019. eventAI at SemEval-2019 Task 7: Rumor Detection on Social Media by Exploiting Content, User Credibility and Propagation Information. In Proceedings of the 13th International Workshop on Semantic Evaluation (SemEval-2019), pages 855-859.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Exploiting adversarial transfer learning for adverse drug reaction detection from texts", |
| "authors": [ |
| { |
| "first": "Zhiheng", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhihao", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Ling", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Xiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Hongfei", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Journal of Biomedical Informatics", |
| "volume": "", |
| "issue": "", |
| "pages": "103431", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhiheng Li, Zhihao Yang, Ling Luo, Yang Xiang, and Hongfei Lin. 2020. Exploiting adversarial transfer learning for adverse drug reaction detection from texts. Journal of Biomedical Informatics, page 103431.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "ktrain: A low-code library for augmented machine learning", |
| "authors": [ |
| { |
| "first": "Arun", |
| "middle": [ |
| "S." |
| ], |
| "last": "Maiya", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2004.10703" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arun S. Maiya. 2020. ktrain: A low-code library for augmented machine learning. arXiv, arXiv:2004.10703 [cs.LG].", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "I couldn't agree more: The role of conversational structure in agreement and disagreement detection in online discussions", |
| "authors": [ |
| { |
| "first": "Sara", |
| "middle": [], |
| "last": "Rosenthal", |
| "suffix": "" |
| }, |
| { |
| "first": "Kathy", |
| "middle": [], |
| "last": "Mckeown", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue", |
| "volume": "", |
| "issue": "", |
| "pages": "168--177", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sara Rosenthal and Kathy McKeown. 2015. I couldn't agree more: The role of conversational structure in agreement and disagreement detection in online discussions. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 168-177, Prague, Czech Republic, September. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", |
| "authors": [ |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Sanh", |
| "suffix": "" |
| }, |
| { |
| "first": "Lysandre", |
| "middle": [], |
| "last": "Debut", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Chaumond", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "5th Workshop on Energy Efficient Machine Learning and Cognitive Computing - NeurIPS 2019", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing - NeurIPS 2019.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Social Media Mining for Toxicovigilance: Automatic Monitoring of Prescription Medication Abuse from Twitter", |
| "authors": [ |
| { |
| "first": "Abeed", |
| "middle": [], |
| "last": "Sarker", |
| "suffix": "" |
| }, |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "O'Connor", |
| "suffix": "" |
| }, |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Ginn", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Scotch", |
| "suffix": "" |
| }, |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Malone", |
| "suffix": "" |
| }, |
| { |
| "first": "Graciela", |
| "middle": [], |
| "last": "Gonzalez", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Drug Safety", |
| "volume": "39", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abeed Sarker, Karen O'Connor, Rachel Ginn, Matthew Scotch, Karen Smith, Dan Malone, and Graciela Gonzalez. 2016. Social Media Mining for Toxicovigilance: Automatic Monitoring of Prescription Medication Abuse from Twitter. Drug Safety, 39.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Self-reported COVID-19 symptoms on Twitter: an analysis and a research resource", |
| "authors": [ |
| { |
| "first": "Abeed", |
| "middle": [], |
| "last": "Sarker", |
| "suffix": "" |
| }, |
| { |
| "first": "Sahithi", |
| "middle": [], |
| "last": "Lakamana", |
| "suffix": "" |
| }, |
| { |
| "first": "Whitney", |
| "middle": [], |
| "last": "Hogg-Bremer", |
| "suffix": "" |
| }, |
| { |
| "first": "Angel", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammed", |
| "middle": [ |
| "Ali" |
| ], |
| "last": "Al-Garadi", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuan-Chi", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Journal of the American Medical Informatics Association", |
| "volume": "27", |
| "issue": "8", |
| "pages": "1310--1315", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abeed Sarker, Sahithi Lakamana, Whitney Hogg-Bremer, Angel Xie, Mohammed Ali Al-Garadi, and Yuan-Chi Yang. 2020. Self-reported COVID-19 symptoms on Twitter: an analysis and a research resource. Journal of the American Medical Informatics Association, 27(8):1310-1315, 07.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Dialogue act modeling for automatic tagging and recognition of conversational speech", |
| "authors": [ |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| }, |
| { |
| "first": "Klaus", |
| "middle": [], |
| "last": "Ries", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [], |
| "last": "Coccaro", |
| "suffix": "" |
| }, |
| { |
| "first": "Elizabeth", |
| "middle": [], |
| "last": "Shriberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Bates", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Taylor", |
| "suffix": "" |
| }, |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Martin", |
| "suffix": "" |
| }, |
| { |
| "first": "Carol", |
| "middle": [], |
| "last": "Van Ess-Dykema", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie", |
| "middle": [], |
| "last": "Meteer", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Computational Linguistics", |
| "volume": "26", |
| "issue": "3", |
| "pages": "339--374", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339-374.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Affective Behaviour Analysis of On-line User Interactions: Are On-line Support Groups more Therapeutic than Twitter?", |
| "authors": [ |
| { |
| "first": "Giuliano", |
| "middle": [], |
| "last": "Tortoreto", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeny", |
| "middle": [ |
| "A" |
| ], |
| "last": "Stepanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandra", |
| "middle": [], |
| "last": "Cervone", |
| "suffix": "" |
| }, |
| { |
| "first": "Mateusz", |
| "middle": [], |
| "last": "Dubiel", |
| "suffix": "" |
| }, |
| { |
| "first": "Giuseppe", |
| "middle": [], |
| "last": "Riccardi", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 4th Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "79--88", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Giuliano Tortoreto, Evgeny A Stepanov, Alessandra Cervone, Mateusz Dubiel, and Giuseppe Riccardi. 2019. Affective Behaviour Analysis of On-line User Interactions: Are On-line Support Groups more Therapeutic than Twitter? In Proceedings of the 4th Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 79-88.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Overview of the Fourth Social Media Mining for Health (#SMM4H) Shared Task at ACL 2019", |
| "authors": [ |
| { |
| "first": "Davy", |
| "middle": [], |
| "last": "Weissenbacher", |
| "suffix": "" |
| }, |
| { |
| "first": "Abeed", |
| "middle": [], |
| "last": "Sarker", |
| "suffix": "" |
| }, |
| { |
| "first": "Arjun", |
| "middle": [], |
| "last": "Magge", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashlynn", |
| "middle": [], |
| "last": "Daughton", |
| "suffix": "" |
| }, |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "O'Connor", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Paul", |
| "suffix": "" |
| }, |
| { |
| "first": "Graciela", |
| "middle": [], |
| "last": "Gonzalez-Hernandez", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 4th Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "21--30", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Davy Weissenbacher, Abeed Sarker, Arjun Magge, Ashlynn Daughton, Karen O'Connor, Michael Paul, and Graciela Gonzalez-Hernandez. 2019. Overview of the Fourth Social Media Mining for Health (#SMM4H) Shared Task at ACL 2019. In Proceedings of the 4th Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 21-30.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Transformers: State-of-the-art Natural Language Processing", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| }, |
| { |
| "first": "Lysandre", |
| "middle": [], |
| "last": "Debut", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Sanh", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Chaumond", |
| "suffix": "" |
| }, |
| { |
| "first": "Clement", |
| "middle": [], |
| "last": "Delangue", |
| "suffix": "" |
| }, |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Moi", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierric", |
| "middle": [], |
| "last": "Cistac", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Rault", |
| "suffix": "" |
| }, |
| { |
| "first": "R\u00e9mi", |
| "middle": [], |
| "last": "Louf", |
| "suffix": "" |
| }, |
| { |
| "first": "Morgan", |
| "middle": [], |
| "last": "Funtowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Jamie", |
| "middle": [], |
| "last": "Brew", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Transformers: State-of-the-art Natural Language Processing. ArXiv.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Discourse-Aware Rumour Stance Classification in Social Media Using Sequential Classifiers", |
| "authors": [ |
| { |
| "first": "Arkaitz", |
| "middle": [], |
| "last": "Zubiaga", |
| "suffix": "" |
| }, |
| { |
| "first": "Elena", |
| "middle": [], |
| "last": "Kochkina", |
| "suffix": "" |
| }, |
| { |
| "first": "Maria", |
| "middle": [], |
| "last": "Liakata", |
| "suffix": "" |
| }, |
| { |
| "first": "Rob", |
| "middle": [], |
| "last": "Procter", |
| "suffix": "" |
| }, |
| { |
| "first": "Michal", |
| "middle": [], |
| "last": "Lukasik", |
| "suffix": "" |
| }, |
| { |
| "first": "Kalina", |
| "middle": [], |
| "last": "Bontcheva", |
| "suffix": "" |
| }, |
| { |
| "first": "Trevor", |
| "middle": [], |
| "last": "Cohn", |
| "suffix": "" |
| }, |
| { |
| "first": "Isabelle", |
| "middle": [], |
| "last": "Augenstein", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Information Processing & Management", |
| "volume": "54", |
| "issue": "", |
| "pages": "273--390", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arkaitz Zubiaga, Elena Kochkina, Maria Liakata, Rob Procter, Michal Lukasik, Kalina Bontcheva, Trevor Cohn, and Isabelle Augenstein. 2018. Discourse-Aware Rumour Stance Classification in Social Media Using Sequential Classifiers. Information Processing & Management, 54(2):273-390.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "The relative position of target posts to each other within threads; distribution of the target class (i.e. positively labelled posts)", |
| "num": null |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Correlation matrix of ground truth labels and BERT predictions with the manually engineered features. The size and colour of the squares corresponds to the strength of the correlation", |
| "num": null |
| }, |
| "TABREF1": { |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "Manually engineered features to model conversational structure" |
| }, |
| "TABREF3": { |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "Statistics on the data sets. The ADR Discussions data set has two target classes." |
| }, |
| "TABREF5": { |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "Evaluation results of mean model performance over 10 folds. Features are selected through step-wise greedy feature selection." |
| } |
| } |
| } |
| } |