{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:49:21.486495Z"
},
"title": "Using BERT for Qualitative Content Analysis in Psycho-Social Online Counseling",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Grandeit",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nuremberg Institute of Technology Georg Simon Ohm",
"location": {
"settlement": "Nuremberg",
"country": "Germany"
}
},
"email": "grandeitph64509@th-nuernberg.de"
},
{
"first": "Carolyn",
"middle": [],
"last": "Haberkern",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nuremberg Institute of Technology Georg Simon Ohm",
"location": {
"settlement": "Nuremberg",
"country": "Germany"
}
},
"email": "haberkernca76525@th-nuernberg.de"
},
{
"first": "Maximiliane",
"middle": [],
"last": "Lang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nuremberg Institute of Technology Georg Simon Ohm",
"location": {
"settlement": "Nuremberg",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Jens",
"middle": [],
"last": "Albrecht",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nuremberg Institute of Technology Georg Simon Ohm",
"location": {
"settlement": "Nuremberg",
"country": "Germany"
}
},
"email": "albrechtje@th-nuernberg.de"
},
{
"first": "Robert",
"middle": [],
"last": "Lehmann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nuremberg Institute of Technology Georg Simon Ohm",
"location": {
"settlement": "Nuremberg",
"country": "Germany"
}
},
"email": "lehmannro@th-nuernberg.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Qualitative content analysis is a systematic method commonly used in the social sciences to analyze textual data from interviews or online discussions. However, this method usually requires high expertise and manual effort because human coders need to read, interpret, and manually annotate text passages. This is especially true if the system of categories used for annotation is complex and semantically rich. Therefore, qualitative content analysis could benefit greatly from automated coding. In this work, we investigate the usage of machine learning-based text classification models for automatic coding in the area of psychosocial online counseling. We developed a system of over 50 categories to analyze counseling conversations, labeled over 10,000 text passages manually, and evaluated the performance of different machine learning-based classifiers against human coders.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Qualitative content analysis is a systematic method commonly used in the social sciences to analyze textual data from interviews or online discussions. However, this method usually requires high expertise and manual effort because human coders need to read, interpret, and manually annotate text passages. This is especially true if the system of categories used for annotation is complex and semantically rich. Therefore, qualitative content analysis could benefit greatly from automated coding. In this work, we investigate the usage of machine learning-based text classification models for automatic coding in the area of psychosocial online counseling. We developed a system of over 50 categories to analyze counseling conversations, labeled over 10,000 text passages manually, and evaluated the performance of different machine learning-based classifiers against human coders.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Online counseling has developed into a full-fledged psycho-social counseling service in Germany since the 1990s. Today, people can get advice on a wide variety of psycho-social topics in web forums and dedicated text-based counseling platforms. Online counseling is provided by psychosocial professionals who have received special training in this method. Similar to face-to-face psycho-social counseling, some aspects are known to make up high-quality online counseling, but there is little empirical evidence for specific impact factors (Fukkink et al 2009, Dowling & Rickwood 2014).",
"cite_spans": [
{
"start": 534,
"end": 553,
"text": "(Fukkink et al 2009",
"ref_id": "BIBREF28"
},
{
"start": 554,
"end": 579,
"text": ", Dowling & Rickwood 2014",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Psycho-Social Online Counseling",
"sec_num": "1.1"
},
{
"text": "Due to the complexity of the content, quantitative approaches have not been able to analyze the meaning and significance of methodical patterns in large numbers of counseling communications (Navarro et al. 2019). It is, however, possible to understand and describe the meaning of online counseling content with qualitative approaches (Bambling et al. 2008, Gatti et al. 2016).",
"cite_spans": [
{
"start": 190,
"end": 211,
"text": "(Navarro et al. 2019)",
"ref_id": "BIBREF46"
},
{
"start": 335,
"end": 356,
"text": "(Bambling et al. 2008",
"ref_id": "BIBREF5"
},
{
"start": 357,
"end": 376,
"text": ", Gatti et al. 2016",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Psycho-Social Online Counseling",
"sec_num": "1.1"
},
{
"text": "This allows linking certain interventions of the counselors to the reactions of the clients on a case-by-case basis. But generalized statements on causal relationships are not possible with the small number of cases from qualitative studies (Ersahin & Hanley 2017).",
"cite_spans": [
{
"start": 240,
"end": 262,
"text": "(Ersahin & Hanley 2017",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Psycho-Social Online Counseling",
"sec_num": "1.1"
},
{
"text": "An analysis of large numbers of counseling conversations using qualitative social research tools would help to better understand how successful online counseling works. Few related studies on these topics are available. Althoff et al. (2016) defined different models to measure general conversation strategies like adaptability, dealing with ambiguity, creativity, making progress, or change in perspective and illustrated their applicability on a corpus of data from SMS counseling. P\u00e9rez-Rosas et al. (2019) analyzed the quality of counseling communications based on video recordings. Their automatic classifier used linguistic aspects of the content and could predict counseling quality with relatively good accuracy. However, neither of the mentioned approaches had the intention to recognize the meaning of individual phrases, even though this deep understanding is crucial to eliminate weaknesses in the education of online counselors (Luitgaarden et al. 2016, Niuewboer et al. 2014). In addition, systems could be developed to provide online counselors with practical suggestions for improving their work.",
"cite_spans": [
{
"start": 220,
"end": 241,
"text": "Althoff et al. (2016)",
"ref_id": "BIBREF3"
},
{
"start": 483,
"end": 508,
"text": "P\u00e9rez-Rosas et al. (2019)",
"ref_id": "BIBREF50"
},
{
"start": 939,
"end": 963,
"text": "(Luitgaarden et al. 2016",
"ref_id": null
},
{
"start": 964,
"end": 987,
"text": ", Niuewboer et al. 2014",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Psycho-Social Online Counseling",
"sec_num": "1.1"
},
{
"text": "Qualitative social research is a generic term for various research approaches. It attempts to gain a better understanding of people's social realities and to draw attention to recurring processes, patterns of interpretation, and structural characteristics (Kergel, 2018) .",
"cite_spans": [
{
"start": 256,
"end": 270,
"text": "(Kergel, 2018)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Content Analysis",
"sec_num": "1.2"
},
{
"text": "One such research approach deals with the content analysis of texts, the so-called qualitative content analysis according to Mayring (2015) . It is a central source of scientific knowledge in qualitative social research. It tries to determine the subjective meaning of contents in texts. For this purpose, categories are formed based on known scientific theories on the topic and the discursive examination of the content. The definitions of those categories along with representative text passages are summarized in a codebook.",
"cite_spans": [
{
"start": 125,
"end": 139,
"text": "Mayring (2015)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Content Analysis",
"sec_num": "1.2"
},
{
"text": "Then, human coders are coached in using the codebook. The coaching process and the implementation of the coding require high human expertise and manual effort because the coders must read, interpret, and annotate each text passage. Thus, qualitative studies can only be applied to a limited number of texts. Furthermore, it is hardly possible to define the categories so precisely that all coders find identical results, as human language is inherently ambiguous and its interpretation always partly subjective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Content Analysis",
"sec_num": "1.2"
},
{
"text": "Machine learning could be a solution to the dilemma: If a trained model was able to categorize parts of the conversations according to a given codebook with similar accuracy as a human, the time-consuming text analysis could be automated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Content Analysis",
"sec_num": "1.2"
},
{
"text": "Previous studies have shown that supervised machine learning is generally suitable for qualitative content analysis (Crowston e.a. 2010, Scharkow 2013). However, these studies used only a few categories that could be distinguished relatively well, e.g. news categories like sports and business. Online counseling, in contrast, is a complex domain. A detailed system of categories is necessary to identify impactful patterns in counseling conversations. Additionally, many categories such as \"Empathy\" or \"Compassion\" are quite similar in terms of the words used and can only be distinguished if the model is able to somehow \"understand\" the meaning of the texts.",
"cite_spans": [
{
"start": 116,
"end": 135,
"text": "(Crowston e.a. 2010",
"ref_id": null
},
{
"start": 136,
"end": 151,
"text": ", Scharkow 2013",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Learning for Qualitative Content Analysis",
"sec_num": "1.3"
},
{
"text": "Recent neural models have drastically outperformed previous approaches for sophisticated problems like sentiment analysis and emotion detection (Howard & Ruder 2018, Devlin et al. 2018, Chatterjee et al. 2019). We wanted to investigate if these models can be used for qualitative content analysis of online counseling conversations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Learning for Qualitative Content Analysis",
"sec_num": "1.3"
},
{
"text": "Our first research question is whether it is possible to train a model to identify psycho-social codes with a human-like precision. It also needs to be clarified whether a certain machine learning approach is particularly well suited for certain topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research questions / Contribution",
"sec_num": "1.4"
},
{
"text": "It is assumed that this training does not work equally well with all codes of the codebook. Therefore, the second question is which characteristics codes must have in order to be learned particularly well or particularly poorly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research questions / Contribution",
"sec_num": "1.4"
},
{
"text": "In social science research, the discussion of different assessments of text passages is an important part of the scientific process. Therefore, the analysis of codes incorrectly assigned by a model is an important part of this work. The third research question is, therefore: What differences can be observed between the machine and human coding of text passages? If the deviations are plausible, they can be perceived as enriching the discursive process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research questions / Contribution",
"sec_num": "1.4"
},
{
"text": "For the experimental evaluation, the social scientists in our interdisciplinary team created a codebook consisting of over 50 fine-grained categories and labeled over 10,000 text sequences of psychosocial counseling conversations (described in Section 2). The computer scientists then trained and evaluated a support-vector machine and different state-of-the-art models (e.g. ULMFit and BERT) on the provided data set (Section 3). Finally, the team investigated how human coders from the social sciences perform in comparison to the BERT model on a subset of the data (Section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Structure of the Paper",
"sec_num": "1.5"
},
{
"text": "Online forums for psycho-social counseling provide a good basis for an empirical evaluation because they contain large amounts of publicly accessible data. For our study, we used posts from a German site for parent counseling. Here, parents who have problems in bringing up their children are seeking advice. Possible topics are, for example, drug abuse by the child or inadequate school performance. A user can start a new thread with a problem description. Professional counselors reply and discuss solution approaches with the initial user and others. Thus, each thread contains a series of posts with questions and suggestions about the initially described problem. Since we are especially interested in counseling patterns, we focused on the posts of professional counselors in our analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creating the Data Set",
"sec_num": "2"
},
{
"text": "Based on existing scientific theories (Fukkink et al 2009, Dowling & Rickwood 2014) on online counseling and first analyses of the text content, a first version of the codebook was created. The various aspects expected in counseling conversations were mapped to a logical hierarchical structure (see Figure 1). The top level covers general counseling aspects, such as \"General attitudes\" or \"Impact factors\". On the intermediate level, these aspects were distinguished more finely, e.g. \"Help for problem overcoming\". The categories at the lowest level are the ones to be used for the annotation of the text passages, such as \"Recommendation for action\" or \"Warning / forecast\".",
"cite_spans": [
{
"start": 38,
"end": 57,
"text": "(Fukkink et al 2009",
"ref_id": "BIBREF28"
},
{
"start": 58,
"end": 83,
"text": ", Dowling & Rickwood 2014",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 300,
"end": 309,
"text": "Figure 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Development of the Codebook",
"sec_num": "2.1"
},
{
"text": "The different codes were defined as precisely as possible and provided with typical examples. The team of coders applied this codebook to the counseling texts in several turns and iteratively improved the codebook. The final version consists of 51 granular categories (see Appendix A).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development of the Codebook",
"sec_num": "2.1"
},
{
"text": "Based on the codebook described in Section 2.1, a team of coding social scientists manually labeled over 10,000 text sequences in 336 threads. Such a sequence can consist of only a few words (e.g. a greeting) or even multiple sentences (e.g. a recommended action). Sequences, however, do not overlap, i.e. each word should be part of only one labeled sequence. See Figure 2 for an illustration.",
"cite_spans": [],
"ref_spans": [
{
"start": 365,
"end": 373,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Labeling",
"sec_num": "2.2"
},
{
"text": "In the end, we obtained a heavily imbalanced data set: The average number of samples per category is about 200, but the numbers vary greatly (see Appendix A for more details). For some categories in the area \"Impact factors\", e.g. \"Evaluation / understanding / calming\" or \"Experience / explanation / example\" we obtained over 1000 samples, whereas other categories including \"Change\" or \"Suggestion to put oneself in a problem situation physically\" are barely represented. Such an unequal distribution of the frequencies of single codes is not unusual in the social sciences. Since there is no statistical analysis in qualitative research, this is usually not a problem. There are even some research approaches that consider the analysis of very rare codes, in particular, to be extremely insightful (Glaser 2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Labeling",
"sec_num": "2.2"
},
{
"text": "After labeling, we tested the impact of common preprocessing techniques like lemmatization and the removal of usernames. It turned out that both the support-vector machine classifier and the BERT model work best without any of these techniques. Therefore, we used the labeled data without such modifications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation and Preprocessing",
"sec_num": "2.3"
},
{
"text": "However, the BERT model can only process fixed-length sequences consisting of at most 512 subword units called WordPiece tokens (Vaswani et al., 2017) . Thus, we restricted the sequence length for all training data. We decided to work with a limit of only 256 WordPiece tokens. This value provides a good trade-off between performance and resource consumption in our setting. Longer sequences yield potentially more accurate results but generate a high overhead because all sequences must be padded to the specified length.",
"cite_spans": [
{
"start": 128,
"end": 150,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation and Preprocessing",
"sec_num": "2.3"
},
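The truncate-then-pad step described above can be sketched in a few lines. This is a minimal illustration, not the paper's code: a whitespace split stands in for the real WordPiece tokenizer (which produces subword units), and the `prepare` helper and `[PAD]` marker are our own names.

```python
# Sketch of the fixed-length preparation described in the text.
# A whitespace split stands in for the WordPiece tokenizer; the real
# tokenizer would emit subword units and special tokens.
MAX_LEN = 256  # the trade-off limit chosen in the paper
PAD = "[PAD]"

def prepare(tokens, max_len=MAX_LEN, pad=PAD):
    """Truncate to max_len tokens, then pad shorter sequences up to max_len."""
    tokens = tokens[:max_len]
    return tokens + [pad] * (max_len - len(tokens))

seq = "Hast Du mit den Erzieherinnen gesprochen ?".split()
fixed = prepare(seq)  # always exactly 256 entries
```

Every sequence in a batch ends up with the same length, which is what makes the padding overhead grow with the chosen limit.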
{
"text": "Since only a little more than 1% of the complete data samples contain more than 256 WordPiece tokens, we did not lose much information (cf. Table 1). Instead, the trade-off in length allowed using higher batch sizes and faster training.",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 150,
"text": ". Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Preparation and Preprocessing",
"sec_num": "2.3"
},
{
"text": "To make the results of the different classifiers comparable and to take the data set imbalance into account, a stratified 70-30 train-test split was performed on the data set. This results in a training data set with 7169 samples and a test data set with 3072 samples in total. See Appendix A for the number of samples in each category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation and Preprocessing",
"sec_num": "2.3"
},
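A stratified split preserves each category's proportion in both partitions, which matters with 51 classes of very different sizes. The paper does not state its tooling here; scikit-learn's `train_test_split(..., stratify=labels)` would be the usual choice, and the pure-Python sketch below (function name ours) shows the underlying idea.

```python
import random
from collections import defaultdict

def stratified_split(samples, labels, test_frac=0.3, seed=42):
    """Split per class so each category keeps roughly its 70-30 proportion
    in both the training and the test set."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for sample, label in zip(samples, labels):
        by_label[label].append(sample)
    train, test = [], []
    for label, items in by_label.items():
        rng.shuffle(items)
        cut = round(len(items) * test_frac)  # per-class test share
        test.extend((s, label) for s in items[:cut])
        train.extend((s, label) for s in items[cut:])
    return train, test

# Toy data: 10 samples of class "a", 20 of class "b".
train, test = stratified_split(list(range(30)), ["a"] * 10 + ["b"] * 20)
```

With a plain random split, a rare category could easily end up entirely in one partition; the per-class cut prevents that.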
{
"text": "As a result of the created codebook and the collected data, our classification task consists of classifying psycho-social text sequences into one of 51 categories. For the training of the classifiers, the data set described in the previous section with 7169 samples is used. The created models are then evaluated against the 3072 samples in our test data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model-Based Classification of Psycho-Social Text Sequences",
"sec_num": "3"
},
{
"text": "The support-vector machine (SVM) is a commonly used classifier because it is lightweight, trains quickly, and still achieves good results in text classification tasks (Aggarwal, 2018, pp. 12). Therefore, the SVM was chosen as a baseline model. The prepared data was transformed into TF-IDF vectors (bag-of-words) for training and evaluation (Aggarwal, 2018, pp. 24-26). The model was implemented using the scikit-learn library. The hyperparameters were chosen according to the results of our hyperparameter tuning. Apart from the default parameters of the TF-IDF vectorizer, a max_df value of 0.5 and a min_df value of 0 were used. Additionally, inverse-document-frequency reweighting was enabled, and unigrams as well as bigrams were considered. The support-vector classifier itself used a sigmoid kernel with the gamma value set to \"scale\", a C value of 10, and enabled probability estimates, which internally enables 5-fold cross-validation.",
"cite_spans": [
{
"start": 189,
"end": 213,
"text": "(Aggarwal, 2018, pp. 12)",
"ref_id": null
},
{
"start": 364,
"end": 391,
"text": "(Aggarwal, 2018, pp. 24-26)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Support-Vector Machine as a Baseline",
"sec_num": "3.1"
},
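The configuration above maps directly onto scikit-learn. The sketch below is our reconstruction, not the paper's published code: the toy corpus and labels are invented, and `min_df` is passed as the float `0.0` (the lower document-frequency cutoff effectively disabled), since recent scikit-learn versions validate integer `min_df` values.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# TF-IDF settings from the text: max_df=0.5, min_df=0 (here as float 0.0),
# idf reweighting enabled (use_idf=True), unigrams and bigrams.
vectorizer = TfidfVectorizer(max_df=0.5, min_df=0.0, use_idf=True,
                             ngram_range=(1, 2))
# SVC settings from the text: sigmoid kernel, gamma="scale", C=10,
# probability=True (internally runs 5-fold cross-validation).
clf = SVC(kernel="sigmoid", gamma="scale", C=10, probability=True)
model = make_pipeline(vectorizer, clf)

# Invented stand-in for the labeled counseling sequences.
texts = ["hello and welcome", "hi dear user", "good morning to you",
         "hello there friend", "good luck with everything",
         "i wish you strength", "all the best", "fingers crossed for you"]
labels = ["greeting"] * 4 + ["wish"] * 4
model.fit(texts, labels)
```

Wrapping vectorizer and classifier in one pipeline keeps the TF-IDF vocabulary fitted on the training data only.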
{
"text": "The SVM achieved a total accuracy of 68.8% on the test data (cf. Table 2). Due to the heavily imbalanced data set, however, the total accuracy is not a good indicator of the model's performance. Thus, we also calculated the macro and weighted F1 scores. The SVM achieves a weighted F1 score of 68.0% (close to the accuracy) and a macro F1 score of 39.7%. The low macro F1 indicates that classes with little support are frequently misclassified.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 72,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Support-Vector Machine as a Baseline",
"sec_num": "3.1"
},
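The gap between the two scores is easiest to see from their definitions: macro F1 averages per-class F1 with equal weight, while weighted F1 weights each class by its support. A minimal pure-Python sketch (function names ours; scikit-learn's `f1_score` with `average="macro"` / `average="weighted"` computes the same quantities):

```python
from collections import Counter

def per_class_f1(y_true, y_pred):
    """F1 for each class from per-class precision and recall."""
    scores = {}
    for c in sorted(set(y_true)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

def macro_f1(y_true, y_pred):
    # Every class counts equally, regardless of its size.
    scores = per_class_f1(y_true, y_pred)
    return sum(scores.values()) / len(scores)

def weighted_f1(y_true, y_pred):
    # Each class is weighted by its support (number of true samples).
    scores = per_class_f1(y_true, y_pred)
    support = Counter(y_true)
    return sum(scores[c] * support[c] for c in scores) / len(y_true)
```

On imbalanced data, a classifier that does well on large classes but poorly on rare ones gets a high weighted F1 and a low macro F1, exactly the pattern reported above.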
{
"text": "A detailed analysis of the results shows that the SVM achieves quite good results in categories with a large number of training samples. For instance, an F1 score of 76.2% is achieved in the category \"Experience / explanation / example\" with 1398 training and 599 test sequences. Furthermore, simple sequences that only contain a few keywords, such as greeting phrases in the category \"Start of conversation\", can also be identified quite well, even though only a few training samples exist. In particular, the category \"General salutation\" achieves an F1 score of 75.0% while only having 22 training and 9 test samples. More complex categories, such as the expression of \"Empathy for others\", however, achieve lower F1 scores of 59.8% even with a relatively high number of 118 training and 51 test samples. Other categories like \"Warning / forecast\" achieve even lower F1 scores of only 29.3% despite having 71 training and 30 test samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support-Vector Machine as a Baseline",
"sec_num": "3.1"
},
{
"text": "BERT is a multi-layer bidirectional Transformer encoder based on the original Transformer implementation described in Vaswani et al. (2017) . BERT is typically pre-trained on two unsupervised learning tasks. After the pre-training, the model can be fine-tuned according to the downstream task (Vaswani et al., 2017) .",
"cite_spans": [
{
"start": 118,
"end": 139,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF58"
},
{
"start": 293,
"end": 315,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT as Advanced Classifier",
"sec_num": "3.2"
},
{
"text": "For the classification task in our approach, we used the BertForSequenceClassification implementation from the Hugging Face's Transformers library (Wolf et al., 2019) that combines the BERT Transformer model with a sequence classification head on top (Hugging Face, 2020).",
"cite_spans": [
{
"start": 147,
"end": 166,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT as Advanced Classifier",
"sec_num": "3.2"
},
{
"text": "In total, we tested thirteen pre-trained BERT models. Among the ten tested German language models, the results varied between a weighted F1 score of 69.3% and 74.4% on the test data set, whereby the best result was achieved with the pre- The hyperparameters used for the fine-tuning were taken from the original BERT publication (Devlin e.a., 2018). Since we are using text sequences with a length of 256 WordPiece tokens, a batch size of no more than 16 was possible due to GPU memory limitations. Larger models, especially multi-lingual models, allowed a batch size of only 8. Further testing has shown that the best results can be achieved with a learning rate of 2e-5 and 4 epochs. Table 2 shows the different evaluation metrics for both the SVM and the best BERT classifier.",
"cite_spans": [],
"ref_spans": [
{
"start": 697,
"end": 704,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "BERT as Advanced Classifier",
"sec_num": "3.2"
},
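The fine-tuning setup described above can be sketched with the Hugging Face Transformers library. This is a hedged reconstruction, not the paper's code: the training loop, the function name, and the model identifier `bert-base-german-cased` are our assumptions (the text does not name the winning checkpoint), while the hyperparameter values come from the text.

```python
# Hyperparameters reported in the text; the dict and its key names are ours.
HPARAMS = {
    "max_seq_length": 256,   # WordPiece tokens per sequence
    "batch_size": 16,        # 8 for the larger, multilingual models
    "learning_rate": 2e-5,
    "epochs": 4,
    "num_labels": 51,        # one output per codebook category
}

def fine_tune(train_texts, train_labels, model_name="bert-base-german-cased"):
    """Sketch of the fine-tuning loop; model_name is one plausible choice."""
    # Imports kept local so the sketch can be read without transformers installed.
    import torch
    from transformers import BertForSequenceClassification, BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained(model_name)
    model = BertForSequenceClassification.from_pretrained(
        model_name, num_labels=HPARAMS["num_labels"])
    enc = tokenizer(train_texts, truncation=True, padding="max_length",
                    max_length=HPARAMS["max_seq_length"], return_tensors="pt")
    optimizer = torch.optim.AdamW(model.parameters(), lr=HPARAMS["learning_rate"])
    labels = torch.tensor(train_labels)
    model.train()
    for _ in range(HPARAMS["epochs"]):
        for start in range(0, len(train_labels), HPARAMS["batch_size"]):
            batch = slice(start, start + HPARAMS["batch_size"])
            out = model(input_ids=enc["input_ids"][batch],
                        attention_mask=enc["attention_mask"][batch],
                        labels=labels[batch])
            out.loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model
```

Passing `labels` to `BertForSequenceClassification` makes the model return the cross-entropy loss directly, which is what the classification head is trained against.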
{
"text": "The low macro F1 score of 29.2% for the BERT classifier compared to the 39.7% of the SVM classifier shows that the BERT classifier performs significantly worse on classes with few samples than the SVM classifier. The weighted F1 score of 74.4% for the BERT model compared to 68.0% for the SVM model, however, indicates that the BERT classifier outperforms the SVM if the whole data set is considered. Table 3 shows an extract from the classification report. In general, the performance of the BERT classifier improves as the number of available training samples per class increases.",
"cite_spans": [],
"ref_spans": [
{
"start": 418,
"end": 425,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Analyzing the Classification Results",
"sec_num": "3.3"
},
{
"text": "In specific categories, such as \"Empathy for others\", this observation does not hold. Categories with this behavior often contain previously mentioned category-specific keywords or phrases, which is why the simple bag-of-words approach outperforms the more complex BERT techniques from a statistical point of view. A detailed analysis of the sequences misclassified by the BERT model, however, has shown that the classification of these sequences is not inherently wrong but rather shows suitable alternative affiliations to categories. This behavior is examined in greater detail in Section 3.6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing the Classification Results",
"sec_num": "3.3"
},
{
"text": "In addition to BERT, other classification models, such as DistilBERT (Sanh et al., 2019), XLM-RoBERTa (Conneau et al., 2019), XLM (Lample and Conneau, 2019), and ULMFit (Howard and Ruder, 2018) were examined in our study as well. Table 4 shows the best weighted F1 scores of each model. The DistilBERT model performs around 4% worse than the best BERT model on our test data set. This difference lies around the range described by the authors of the DistilBERT paper (Sanh et al., 2019). In addition to that, both the XLM-RoBERTa and XLM models also perform worse than the best BERT classifier. Apart from the Transformer approaches, the bidirectional RNN model ULMFit was also analyzed. The results show that the different Transformer models as well as the ULMFit model generally perform quite similarly on our classification task, except for the XLM model, which performs even worse than the simple SVM approach. Explainability techniques, such as LIME (Ribeiro et al., 2016) or Attention Flow (Abnar and Zuidema, 2020), can be used to generate model insights. While LIME takes a retrospective approach that can be applied to any classification model, Attention Flow tries to visualize the actual attention maps of Transformer models. Both approaches provide insights that can be used to explain the classification predictions of the models. Since we want to generate model insights regardless of the approach used to create the model, we decided to use LIME as our analyzing tool of choice.",
"cite_spans": [
{
"start": 69,
"end": 88,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF55"
},
{
"start": 103,
"end": 125,
"text": "(Conneau et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 132,
"end": 158,
"text": "(Lample and Conneau, 2019)",
"ref_id": "BIBREF42"
},
{
"start": 172,
"end": 196,
"text": "(Howard and Ruder, 2018)",
"ref_id": "BIBREF36"
},
{
"start": 470,
"end": 489,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF55"
},
{
"start": 922,
"end": 941,
"text": "beiro et al., 2016)",
"ref_id": null
},
{
"start": 960,
"end": 985,
"text": "(Abnar and Zuidema, 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 233,
"end": 240,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Examining other Classification Models",
"sec_num": "3.4"
},
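LIME's core idea, perturbing the input and observing how the prediction changes, can be shown in a toy form. This is only an illustration of the principle: a real analysis would use the `lime` package's `LimeTextExplainer` against the trained classifiers, and the keyword-spotting stand-in classifier below is invented, not the paper's SVM or BERT model.

```python
def word_importance(text, predict_proba, target):
    """Toy LIME-style attribution: drop each word in turn and record how
    much the probability of the target class falls."""
    words = text.split()
    base = predict_proba(text)[target]
    scores = {}
    for i, w in enumerate(words):
        masked = " ".join(words[:i] + words[i + 1:])
        scores[w] = base - predict_proba(masked)[target]
    return scores

# Invented stand-in classifier: a single-keyword spotter.
def toy_predict_proba(text):
    p_support = 0.9 if "Erzieherinnen" in text else 0.1
    return {"support_resources": p_support, "follow_up": 1.0 - p_support}

scores = word_importance("Hast Du mit den Erzieherinnen gesprochen",
                         toy_predict_proba, "support_resources")
```

Here removing "Erzieherinnen" collapses the prediction while removing any other word changes nothing, mirroring the keyword-driven SVM behavior discussed in the next section.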
{
"text": "For example, the analysis of the sentence \"Have you ever spoken to the kindergarten teachers?\" (cf. the original German sentence in Figure 3) helps to further understand the model. Originally, the sequence was coded as \"Follow-up question\" by the expert coders. The BERT classifier classified this sequence correctly, whereas the SVM classifier assigned it to \"Questions about possible support resources\".",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 136,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Explaining the Classifiers",
"sec_num": "3.5"
},
{
"text": "While both assignments might sound reasonable at first, the question arises why each classifier made its prediction. To answer this question, the text heatmaps in Figure 3 were generated with LIME. The percentage values indicate how important the LIME model considers the corresponding word for the classification.",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 176,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Explaining the Classifiers",
"sec_num": "3.5"
},
{
"text": "The BERT heatmap shows that the model mainly focuses on the words that form the question: \"Hast\", \"Du\", \"mit\", \"den\", \"Erzieherinnen\" (Engl. \"have\", \"you\", \"with\", \"kindergarten teachers\"). The SVM heatmap, in contrast, shows that the SVM classifier considers all words important for the classification, but with a strong focus on the word \"Erzieherinnen\" (Engl. \"kindergarten teachers\"), which is a possible support resource.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explaining the Classifiers",
"sec_num": "3.5"
},
{
"text": "This strong focus on individual keywords by the SVM can be explained by the operating principle of the bag-of-words approach and supports the assumption from Section 3.3 that the SVM performs well in classes with distinctive keywords. But examples like this show that this simple approach can also be misled when such distinctive keywords appear in more complex sequences in which the keyword is not decisive for the correct class and the context has to be considered as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explaining the Classifiers",
"sec_num": "3.5"
},
{
"text": "Since LIME follows a bag-of-words evaluation model, it cannot provide additional insights on how our BERT model exactly handles context. Thus, we can only use LIME to illustrate whether the models' decisions are reasonable, or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explaining the Classifiers",
"sec_num": "3.5"
},
{
"text": "To better understand our model and to identify further potential for improvement, the incorrectly classified test data were analyzed. Out of the 3072 test sequences, the BERT model classified 2325 sequences correctly. Out of the 747 incorrectly classified sequences, our team of social scientists manually examined a sample of 191 sequences. The inspected samples were randomly chosen from conspicuous off-diagonal cells of the confusion matrix. The summarized results of this examination are shown in Table 5.",
"cite_spans": [],
"ref_spans": [
{
"start": 528,
"end": 535,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Analyzing Misclassified Sequences",
"sec_num": "3.6"
},
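Picking conspicuous off-diagonal cells of the confusion matrix amounts to counting which (true, predicted) label pairs occur most often among the errors. A small sketch of that selection step (function and variable names ours, labels taken from the examples discussed in the text):

```python
from collections import Counter

def top_confusions(y_true, y_pred, k=3):
    """Most frequent (true, predicted) pairs off the confusion-matrix
    diagonal, i.e. the most conspicuous systematic mix-ups."""
    errors = Counter((t, p) for t, p in zip(y_true, y_pred) if t != p)
    return errors.most_common(k)

# Toy label pairs, loosely based on the mix-ups discussed in this section.
y_true = ["Wish", "Wish", "Wish", "Follow-up question", "Warning / forecast"]
y_pred = ["Other farewell", "Other farewell", "Wish",
          "Question about possible support resources", "Warning / forecast"]
pairs = top_confusions(y_true, y_pred)
```

The most frequent pairs are exactly the cells worth sampling misclassified sequences from for manual inspection.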
{
"text": "The general conclusion of this analysis is that 58.1% (Table 5, I+II) of the incorrectly classified sequences are not inherently wrong but their assigned category depends on the different points of view of the coders. For example, the sequence \"Have you ever talked to a pediatrician? Or do you have a family counseling center?\" was initially encoded as a \"Question about possible support resources\" by the human encoder, whereas the BERT model associated the sequence with a \"Follow-up question\". In our analysis, the experts concluded that both categories would fit. Another example in which the predicted label would fit even better than the actual label is the sequence \"This has to be done consequently, even if screaming is annoying. You have to go through it -sometime.\" This sequence was initially encoded as a \"Warning / forecast\" by the human experts. The BERT model, however, assigned this sequence to the category of \"Recommendation for action\". Since these different interpretation options are not only a technical issue but can also be observed in human coders, the intercoder reliability between an expert coder, an untrained human coder (\"novice\"), and BERT is analyzed in Section 4. For another 23.6% of the analyzed sequences (Table 5 , III+IV), we were able to trace back the incorrect classification to the use of keywords or similar terms between different categories. For example, the simple sequence \"good luck\" is considered to be a \"Wish\" by the human encoders, whereas our BERT model mistakes this sequence for a traditional farewell phrase (category \"Other farewell\"). This behavior of the BERT model can be explained by the fact that some sequences in the training data contain closing phrases, such as \"Good luck [user] \".",
"cite_spans": [
{
"start": 1742,
"end": 1748,
"text": "[user]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1244,
"end": 1252,
"text": "(Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Analyzing Misclassified Sequences",
"sec_num": "3.6"
},
{
"text": "In 14 more cases (Table 5 , V) the experts were unable to identify any distinctive features that caused the sequences to be classified incorrectly by the BERT model. Apart from these technical insights, in 12 cases (Table 5, VI) weaknesses in the training data set were identified, such as incorrect assignments of the actual label previously made by the human coder, sequences composed by clients rather than counselors, or sequences that only contain single characters.",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 25,
"text": "(Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Analyzing Misclassified Sequences",
"sec_num": "3.6"
},
{
"text": "Furthermore, in a total of nine sequences (Table 5 , VII+VIII), the experts declared the sequences as \"hard to assign for humans\" due to the usage of uncommon words, not enough context, or since the sequence consists of multiple sentences with multiple categories.",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 51,
"text": "(Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Analyzing Misclassified Sequences",
"sec_num": "3.6"
},
{
"text": "To estimate the impact of the interpretation options during the classification regarding the evaluation metrics, an adjusted accuracy can be estimated. This adjusted accuracy is calculated by transferring the proportion of analyzed incorrectly classified sequences that are not inherently wrong (Table 5 , I+II) to the total of the 747 incorrectly classified sequences. This means that 58.1% of the originally incorrectly classified sequences can be considered as correct. This leads to an increase of the correctly classified sequences from 2325 to 2759 which corresponds to a more than satisfying accuracy of 90%, respectively. Since this is only an overall estimation, adjusted F1 scores cannot be calculated.",
"cite_spans": [],
"ref_spans": [
{
"start": 295,
"end": 303,
"text": "(Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Analyzing Misclassified Sequences",
"sec_num": "3.6"
},
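The adjusted-accuracy arithmetic above is easy to reproduce. The following sketch is ours, not code from the study; it plugs in the counts reported in this section, with the 58.1% share taken from the manual error analysis in Table 5:

```python
# Reproduce the adjusted accuracy from Section 3.6.
# Counts are taken from the text; 58.1% is the share of misclassified
# sequences judged "not inherently wrong" (Table 5, I+II).
n_test = 3072                 # test sequences in total
n_correct = 2325              # classified correctly by BERT
n_wrong = n_test - n_correct  # 747 misclassified sequences
plausible_share = 0.581       # share transferred from the 191-sample analysis

adjusted_correct = n_correct + plausible_share * n_wrong
adjusted_accuracy = adjusted_correct / n_test

print(round(adjusted_correct))      # 2759
print(round(adjusted_accuracy, 3))  # 0.898, i.e. about 90%
```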
{
"text": "To understand the influence of the availability of training samples, we ran multiple tests in which the number of training samples in a specific category was reduced. Hereby, we tested all categories that achieve an F1 score of 70% or higher. For each of the categories, six models were trained with a restricted number (10, 20, 50, 100, 250, and 500) of randomly selected training samples. All models were then evaluated on our test data set. Results have shown that simple categories, such as \"General salutation\", \"Familiar salutation (without name)\", \"Welcoming\", or \"Follow-up question\", only require about 50 training samples to achieve F1 scores of 0.71 or higher. However, categories that contain text sequences with more complex structures, such as \"Experience / Explanation / Example\" or \"Recommendation for action\", still show significant improvements when using 250, 500, or all available text sequences for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion about Improving the Model",
"sec_num": "3.7"
},
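A sweep like the one described above can be organized as a per-category subsampling loop. The sketch below shows only the data-reduction step; `subsample_per_category` is our illustrative helper, and the actual BERT fine-tuning and F1 evaluation of the six models per category are not reproduced here:

```python
import random

def subsample_per_category(samples, labels, category, n, seed=0):
    """Keep at most n randomly chosen training samples of one category,
    leaving all samples of the other categories untouched."""
    rng = random.Random(seed)
    in_cat = [i for i, y in enumerate(labels) if y == category]
    keep = set(rng.sample(in_cat, min(n, len(in_cat))))
    idx = [i for i, y in enumerate(labels) if y != category or i in keep]
    return [samples[i] for i in idx], [labels[i] for i in idx]

# One reduced training set per budget; each would then be used to
# fine-tune a fresh model that is evaluated on the fixed test set.
budgets = [10, 20, 50, 100, 250, 500]
```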
{
"text": "As described in Section 2, our training data set is unevenly distributed. Data set imbalance is a well-known problem in machine learning (He and Garcia, 2009) and in our case is due to the annotation process. Hereby, available forum posts were annotated without specifically having the category distribution in mind. Typical techniques to reduce the data set imbalance, such as random oversampling or synthetic sampling with data generation (He and Garcia, 2009) , cannot easily be applied to textual data, especially not when precise phrasing and wording is important for the classification as in our case. One technique that might, however, lead to improvements is generating new text sequences by randomly combining sentences from other sequences of the same category. Other possible approaches such as aggregating categories with few examples to their superset-level were also considered but dismissed since our goal is to predict categories on a detailed level.",
"cite_spans": [
{
"start": 137,
"end": 158,
"text": "(He and Garcia, 2009)",
"ref_id": "BIBREF35"
},
{
"start": 441,
"end": 462,
"text": "(He and Garcia, 2009)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion about Improving the Model",
"sec_num": "3.7"
},
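The recombination technique mentioned above can be sketched as follows. This is our illustration with a deliberately naive sentence splitter; the study itself did not publish augmentation code:

```python
import random

def recombine(sequences, n_new, max_sentences=3, seed=0):
    """Create n_new synthetic sequences for one category by randomly
    combining sentences drawn from that category's existing sequences.
    Sentences are split naively on '. '; a real implementation would
    use a proper sentence tokenizer."""
    rng = random.Random(seed)
    pool = [s for seq in sequences for s in seq.split(". ") if s]
    synthetic = []
    for _ in range(n_new):
        k = rng.randint(1, min(max_sentences, len(pool)))
        synthetic.append(". ".join(rng.sample(pool, k)))
    return synthetic
```

Because the sentences are sampled only within one category, the synthetic sequences preserve the category-typical phrasing that the classifier relies on.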
{
"text": "With the approximate number of required samples per category, we think that manually creating additional training data in especially underrepresented classes and edge-cases will, therefore, help to improve the model in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion about Improving the Model",
"sec_num": "3.7"
},
{
"text": "Another idea to improve the model is by taking the model's first and second prediction into account. Human coders can then be supported with suggestions by the model during coding tasks and choose the best fitting label. This feedback can then be used to further improve the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion about Improving the Model",
"sec_num": "3.7"
},
{
"text": "Coding of text passages is to some degree dependent on the subjective perception of the coders. Especially for similar categories like \"Empathy\" and \"Compassion\", different coders will sometimes assign different labels to the same text. Thus, even human coders which were trained on the usage of the codebook will not reach 100% agreement. To get a better understanding of the applicability of our model for automatic coding, we compared the coding performance of BERT against a trained human coder familiar with the codebook (\"expert\") and an untrained human coder (\"novice\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT vs. Human Coders",
"sec_num": "4"
},
{
"text": "The degree of consensus among coders, the intercoder reliability, is often measured by Cohen's \u03ba (kappa) coefficient (Cohen 1960 , Burla et al. 2008 . The maximum value of \u03ba is 1, \u03ba > 0.8 indicates almost perfect, and \u03ba > 0.6 indicates substantial agreement.",
"cite_spans": [
{
"start": 117,
"end": 128,
"text": "(Cohen 1960",
"ref_id": "BIBREF14"
},
{
"start": 129,
"end": 148,
"text": ", Burla et al. 2008",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intercoder Reliability between Experts",
"sec_num": "4.1"
},
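Cohen's kappa compares the observed agreement with the agreement expected by chance from each coder's label frequencies. A minimal reference implementation (our sketch, standard library only):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders: (p_o - p_e) / (1 - p_e), where
    p_o is the observed agreement and p_e the agreement expected by
    chance from the coders' individual label frequencies."""
    assert labels_a and len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)
```

For example, two coders agreeing on 3 of 4 labels, with label frequencies (2, 2) and (3, 1), yield p_o = 0.75, p_e = 0.5, and kappa = 0.5.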
{
"text": "During the creation of the training data, our experts regularly coded the same texts and aligned their coding style. After coding was finished, we calculated the \u03ba coefficient between those two coders who had coded the most samples. Thereby, we considered only posts coded by both coders and text sequences with at least 75% overlap regarding the first and last word. We determined a \u03ba coefficient of 0.73 between those two experts. This value is relatively high given our complex codebook with over 50 categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intercoder Reliability between Experts",
"sec_num": "4.1"
},
{
"text": "To understand how our BERT model performs compared to human coders, we benchmarked the performance of the following three participants:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intercoder Reliability between an Expert, a Novice, and BERT",
"sec_num": "4.2"
},
{
"text": "The expert was one of the coders observed in the intercoder reliability measurement. The novice had only a little experience in text annotation and had just recently familiarized herself with the codebook and typical examples for each category. The third participant was our BERT classification model. All participants had the task to annotate the same 50 text passages. Each text passage was randomly chosen from the set of previously unlabeled forum posts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intercoder Reliability between an Expert, a Novice, and BERT",
"sec_num": "4.2"
},
{
"text": "Besides measuring the intercoder reliability among the participants, we also wanted to generate indications about which sequence length is best suited for the application of the BERT model. For typical coding tasks in the social sciences, the length of a sequence to be coded is defined by a change in the occurring category. This contrasts with most machine-based classifiers which expect a defined sequence of words as input. The choice of start and end for a label in continuous text is usually not part of the classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intercoder Reliability between an Expert, a Novice, and BERT",
"sec_num": "4.2"
},
{
"text": "Therefore, we generated three variants of the 50 text sequences for coding: The first data set consists of single sentences only, the second data set includes, if existing, the following sentence for each sample, and the third data set contains sequences of at most three consecutive sentences. Figure 4 illustrates the breakdown of an exemplary post.",
"cite_spans": [],
"ref_spans": [
{
"start": 295,
"end": 303,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Intercoder Reliability between an Expert, a Novice, and BERT",
"sec_num": "4.2"
},
{
"text": "All three data sets were then coded independently by the participants. As before, the agreement between the different coders was measured using the \u03ba coefficient (see Table 6 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 167,
"end": 174,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Intercoder Reliability between an Expert, a Novice, and BERT",
"sec_num": "4.2"
},
{
"text": "Surprisingly, the intercoder reliability between BERT and the human expert is higher than the intercoder reliability between the expert and the novice, regardless of the sequence length. In its best case, the BERT classifier achieves nearly expertexpert-like intercoder reliability with a value as high as 0.64 in comparison to the earlier calculated expert intercoder reliability of 0.73. It seems that the BERT model has learned the expert style of Figure 4 : Exemplary structure of the sequences within the different data sets coding from the training data better than an untrained human coder using the codebook.",
"cite_spans": [],
"ref_spans": [
{
"start": 451,
"end": 459,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Intercoder Reliability between an Expert, a Novice, and BERT",
"sec_num": "4.2"
},
{
"text": "While classifying sequences that contain only one sentence was rated difficult by the human coders due to the missing context, sequences with up to 3 sentences were rated as too long since they often contained patterns from multiple categories. Therefore, sequences with the length of two sentences were rated as best fitting lengths for classifying sequences by both the novice and the expert coder. In contrast to the ratings of the coders, the intercoder reliability shows the highest values when encoding sequences with the length of only one sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intercoder Reliability between an Expert, a Novice, and BERT",
"sec_num": "4.2"
},
{
"text": "It has been shown that machine-based classifiers can reach human-like performance for the annotation of complex categories in psycho-social texts. The results indicate that the models learn to mimic the coding style of the initial creators of the training data. The trained BERT model was even better in coding than a human novice. As in other areas of machine learning, this bears the risk that a model also learns the bias from the training data. Therefore, it is important to understand and regularly check the decisions of the model by human experts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "High coding quality could not be achieved for all codes, however. Especially underrepresented categories, which are common in social sciences, are problematic. Thus, a sufficient number of training samples is an obvious prerequisite for good results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The typical approach of social sciences in analyzing text corpora consists of coding one text after the other and ignoring unequal frequencies of the individual codes. Our study shows that when using machine learning methods, it is better to generate training examples for as many categories as possible and pay less attention to the complete coding of individual texts. This is an important finding for the organization of future studies in this field.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The investigation of misclassified sequences showed that many recorded misclassifications actually were minor mistakes. The model frequently chose not the actual but a very similar category such that even human experts would regard the assignment plausible. Thus, codes with very similar meanings must be distinguished more sharply to give the model a chance to learn to differentiate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The analysis of the misclassified sequences of BERT opens up new perspectives for the social sciences: More than half of the \"incorrectly classified sequences\" appeared to the human expert to be plausible or at least worthy of consideration. Since the discussion of the understanding of individual text passages is an important element of social science research, such plausible misinterpretations can enrich the research process. They offer an alternative way of looking at reality and force the human coder to either rethink his assessments or to better justify them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Currently, we are working on improving the classification performance. One approach is the generation of additional training data for underrepresented categories. Another idea is using an ensemble of SVM and BERT as a classifier to better utilize the individual strengths of the different models. In any case, the findings on how the models work and perform help to consider such technical aspects in future social science research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "With regard to the application domain, we can conclude that it is definitely possible to analyze online counseling conversations with the help of machine learning. We intend to use machine learning in future research projects to investigate correlations between the different techniques used by counselors and the characteristics and reactions of clients. In addition to the question of whether successful counselors use certain techniques significantly more often than others, it can now be clarified if certain approaches are particularly promising for certain target groups or specific problems. These findings can be integrated into the education of online counselors. Furthermore, assistance systems are conceivable that support online counselors in real-time with information generated from this data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In any case, the results of this study have shown that it is possible to merge the advantages of qualitative and quantitative approaches in social science with the help of machine learning. Automated data annotation for qualitative analysis is the cornerstone for future insights on an unprecedented level. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Quantifying Attention Flow in Transformers",
"authors": [
{
"first": "S",
"middle": [],
"last": "Abnar",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abnar, S. and Zuidema, W. (2020) Quantifying Atten- tion Flow in Transformers [Online].",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Machine Learning for Text",
"authors": [
{
"first": "C",
"middle": [
"C"
],
"last": "Aggarwal",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/978-3-319-73531-3"
]
},
"num": null,
"urls": [],
"raw_text": "Aggarwal, C. C. (2018) Machine Learning for Text, Springer [Online]. https://doi.org/10.1007/978-3- 319-73531-3.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Largescale Analysis of Counseling Conversations: An Application of Natural Language Processing to Mental Health",
"authors": [
{
"first": "T",
"middle": [],
"last": "Althoff",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Althoff, T., Clark, K. and Leskovec, J. (2016) Large- scale Analysis of Counseling Conversations: An Application of Natural Language Processing to Mental Health [Online].",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Online counselling: The experience of counsellors providing synchronous single-session counselling to young people",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bambling",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Reid",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Wegner",
"suffix": ""
}
],
"year": 2008,
"venue": "Counselling and Psychotherapy Research",
"volume": "8",
"issue": "2",
"pages": "110--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bambling, M., King, R., Reid, W. & Wegner, K. (2008) Online counselling: The experience of counsellors providing synchronous single-session counselling to young people, Counselling and Psychotherapy Research, vol. 8, no. 2, pp.110-116 [Online].",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "From Text to Codings: Intercoder Reliability Assessment in Qualitative Content Analysis",
"authors": [
{
"first": "L",
"middle": [],
"last": "Burla",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Knierim",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Barth",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Liewald",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Duetz",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Abel",
"suffix": ""
}
],
"year": 2008,
"venue": "Nursing research",
"volume": "57",
"issue": "2",
"pages": "113--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burla, L., Knierim, B., Barth, J., Liewald, K., Duetz, M. and Abel, T. (2008) From Text to Codings: Inter- coder Reliability Assessment in Qualitative Content Analysis, Nursing research, vol. 57, no. 2, pp. 113- 117 [Online].",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Towards a chatbot for digital counselling",
"authors": [
{
"first": "G",
"middle": [],
"last": "Cameron",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Cameron",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Megaw",
"suffix": ""
},
{
"first": "R",
"middle": [
"B"
],
"last": "Bond",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mulvenna",
"suffix": ""
},
{
"first": "S",
"middle": [
"B"
],
"last": "O'neill",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st International BCS Human Computer Interaction Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cameron, G., Cameron, D. M., Megaw, G., Bond, R. B., Mulvenna, M., O'Neill, S. B. et al. (2017) To- wards a chatbot for digital counselling, Proceed- ings of the 31st International BCS Human Com- puter Interaction Conference (HCI 2017). BCS Learning & Development [Online].",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Counseling activity in single-session online counseling with adolescents: An adherence study",
"authors": [
{
"first": "L",
"middle": [],
"last": "Chardon",
"suffix": ""
},
{
"first": "K",
"middle": [
"S"
],
"last": "Bagraith",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "King",
"suffix": ""
}
],
"year": 2011,
"venue": "Psychotherapy Research",
"volume": "21",
"issue": "5",
"pages": "583--592",
"other_ids": {
"DOI": [
"10.1080/10503307.2011.592550"
]
},
"num": null,
"urls": [],
"raw_text": "Chardon, L., Bagraith, K. S. & King, R. J. (2011) Counseling activity in single-session online coun- seling with adolescents: An adherence study, Psy- chotherapy Research, vol. 21, no.5, pp. 583-592 [Online]. https://doi.org/10.1080/10503307.2011.59 2550.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "SemEval-2019 Task 3: EmoContext -Contextual Emotion Detection in Text",
"authors": [
{
"first": "A",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "K",
"middle": [
"N"
],
"last": "Narahari",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Agrawal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation (SemEval-2019)",
"volume": "",
"issue": "",
"pages": "39--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chatterjee, A., Narahari, K.N., Joshi, M., and Agrawal, P. (2019) SemEval-2019 Task 3: EmoContext - Contextual Emotion Detection in Text. In: Proceed- ings of the 13th International Workshop on Seman- tic Evaluation (SemEval-2019), pp. 39-48 [Online].",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Coefficient of Agreement for Nominal Scales",
"authors": [
{
"first": "J",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "Educ Psychol Meas",
"volume": "20",
"issue": "",
"pages": "37--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen J. (1960) A Coefficient of Agreement for Nom- inal Scales. In: Educ Psychol Meas. 20:37-46 [Online].",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Unsupervised Cross-lingual Representation Learning at Scale",
"authors": [
{
"first": "A",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzm\u00e1n, F., Grave, E., Ott, M., Zet- tlemoyer, L. and Stoyanov, V. (2019) Unsupervised Cross-lingual Representation Learning at Scale [Online]. http://dx.doi.org/10.18653/v1/2020.acl- main.747.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Machine learning and rule-based automated coding of qualitative data",
"authors": [
{
"first": "K",
"middle": [],
"last": "Crowston",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "E",
"middle": [
"E"
],
"last": "Allen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. Am. Soc. Info. Sci. Tech",
"volume": "47",
"issue": "",
"pages": "1--2",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Crowston, K., Liu, X. and Allen, E.E. (2010) Machine learning and rule-based automated coding of quali- tative data, Proc. Am. Soc. Info. Sci. Tech., vol. 47, pp. 1-2 [Online].",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Sentence Multilingual BERT",
"authors": [
{
"first": "",
"middle": [],
"last": "Deeppavlov",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "DeepPavlov (n. d.) Sentence Multilingual BERT [Online]. https://huggingface.co/DeepPavlov/bert- base-multilingual-cased-sentence.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M.-W., Lee, K. and Toutanova, K. (2018) BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding [Online]. http://dx.doi.org/10.18653/v1/N19-1423.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Experiences of counsellors providing online chat counselling to young people",
"authors": [
{
"first": "M",
"middle": [
"J"
],
"last": "Dowling",
"suffix": ""
},
{
"first": "D",
"middle": [
"J"
],
"last": "Rickwood",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Psychologists and Counsellors in Schools",
"volume": "24",
"issue": "2",
"pages": "183--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dowling, M. J. & Rickwood, D. J. (2014) Experiences of counsellors providing online chat counselling to young people, Journal of Psychologists and Coun- sellors in Schools, vol. 24, no.2, pp. 183-196. Cam- bridge University Press, Cambridge, UK [Online].",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Investigating individual online synchronous chat counselling processes and treatment outcomes for young people",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dowling",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Rickwood",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Mental Health",
"volume": "12",
"issue": "3",
"pages": "216--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dowling, M. & Rickwood, D. (2015) Investigating in- dividual online synchronous chat counselling pro- cesses and treatment outcomes for young people, Advances in Mental Health, vol. 12, no. 3, pp. 216- 224 [Online].",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Using text-based synchronous chat to offer therapeutic support to students: A systematic review of the research literature",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Ersahin",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Hanley",
"suffix": ""
}
],
"year": 2017,
"venue": "Health Education Journal",
"volume": "76",
"issue": "5",
"pages": "531--543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ersahin, Z. & Hanley, T. (2017) Using text-based syn- chronous chat to offer therapeutic support to stu- dents: A systematic review of the research litera- ture, Health Education Journal, vol. 76, no. 5, pp. 531-543 [Online].",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Children's experiences with chat support and telephone support",
"authors": [
{
"first": "R",
"middle": [
"G"
],
"last": "Fukkink",
"suffix": ""
},
{
"first": "J",
"middle": [
"M A"
],
"last": "Hermanns",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Child Psychology and Psychiatry",
"volume": "50",
"issue": "6",
"pages": "759--766",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fukkink, R. G. & Hermanns, J. M. A. (2009) Chil- dren's experiences with chat support and telephone support, Journal of Child Psychology and Psychia- try, vol. 50, no. 6, pp.759-766",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Hello! I know you help people here, right?\": A qualitative study of young people's acted motivations in textbased counseling",
"authors": [
{
"first": "F",
"middle": [
"M"
],
"last": "Gatti",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Brivio",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Calciano",
"suffix": ""
}
],
"year": 2016,
"venue": "Children and Youth Services Review",
"volume": "71",
"issue": "",
"pages": "27--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gatti, F. M., Brivio, E. & Calciano, S. (2016), \"Hello! I know you help people here, right?\": A qualitative study of young people's acted motivations in text- based counseling, Children and Youth Services Re- view, vol. 71, pp. 27-35 [Online].",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Discovery of grounded theory: Strategies for qualitative research",
"authors": [
{
"first": "B",
"middle": [
"G"
],
"last": "Glaser",
"suffix": ""
},
{
"first": "A",
"middle": [
"L"
],
"last": "Strauss",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glaser, B. G. & Strauss, A. L. (2017). Discovery of grounded theory: Strategies for qualitative research. Routledge.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Understanding the online therapeutic alliance through the eyes of adolescent service users",
"authors": [
{
"first": "T",
"middle": [],
"last": "Hanley",
"suffix": ""
}
],
"year": 2012,
"venue": "Counselling and Psychotherapy Research",
"volume": "12",
"issue": "1",
"pages": "35--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanley, T. (2012) Understanding the online therapeutic alliance through the eyes of adolescent service us- ers, Counselling and Psychotherapy Research, vol. 12, no.1, pp. 35-43 [Online].",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Learning from Imbalanced Data",
"authors": [
{
"first": "H",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "E",
"middle": [
"A"
],
"last": "Garcia",
"suffix": ""
}
],
"year": 2009,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "21",
"issue": "9",
"pages": "1263--1284",
"other_ids": {
"DOI": [
"10.1109/TKDE.2008.239"
]
},
"num": null,
"urls": [],
"raw_text": "He, H. and Garcia, E. A. (2009) Learning from Imbal- anced Data, IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 9, pp. 1263-1284 [Online]. https://doi.org/10.1109/TKDE.2008.239.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Universal Language Model Fine-tuning for Text Classification",
"authors": [
{
"first": "J",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Howard, J. and Ruder, S. (2018) Universal Language Model Fine-tuning for Text Classification [Online].",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Hugging Face (2020) BertForSequenceClassification",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugging Face (2020) BertForSequenceClassification [Online].",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Qualitative Bildungsforschung -Ein integrativer Ansatz",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kergel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wiesbaden",
"suffix": ""
},
{
"first": "V",
"middle": [
"S"
],
"last": "Springer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/978-3-658-18587-9"
]
},
"num": null,
"urls": [],
"raw_text": "Kergel D. (2018) Qualitative Bildungsforschung -Ein integrativer Ansatz. Wiesbaden, Springer VS [On- line]. https://doi.org/10.1007/978-3-658-18587- 9.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Telephone and online counselling for young people: A naturalistic comparison of session outcome, session impact and therapeutic alliance",
"authors": [
{
"first": "R",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bambling",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Reid",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Thomas",
"suffix": ""
}
],
"year": 2006,
"venue": "Counselling and Psychotherapy Research",
"volume": "6",
"issue": "3",
"pages": "175--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "King, R., Bambling, M., Reid, W. & Thomas, I. (2006) Telephone and online counselling for young people: A naturalistic comparison of session outcome, ses- sion impact and therapeutic alliance, Counselling and Psychotherapy Research, vol. 6, no. 3, pp. 175- 181 [Online].",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Cross-lingual Language Model Pretraining",
"authors": [
{
"first": "G",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lample, G. and Conneau, A. (2019) Cross-lingual Lan- guage Model Pretraining [Online].",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Establishing working relationships in online social work",
"authors": [
{
"first": "G",
"middle": [],
"last": "Van De Luitgaarden",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Van Der Tier",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Social Work",
"volume": "18",
"issue": "3",
"pages": "307--325",
"other_ids": {
"DOI": [
"10.1177/1468017316654347"
]
},
"num": null,
"urls": [],
"raw_text": "van de Luitgaarden, G. & van der Tier, M. (2018) Es- tablishing working relationships in online social work, Journal of Social Work, vol.18 no.3, pp. 307- 325 [Online]. https://doi.org/10.1177/14680173166543 47.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Approaches to Qualitative Research in Mathematics Education: Examples of Methodology and Methods. Dordrecht",
"authors": [
{
"first": "P",
"middle": [],
"last": "Mayring",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "365--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mayring, P. (2015) Qualitative Content Analysis: The- oretical Background and Procedures (Advances in Mathematics Education), in A. Bikner-Ahsbahs, C. Knipping & N. Presmeg (eds), Approaches to Qual- itative Research in Mathematics Education: Exam- ples of Methodology and Methods. Dordrecht, Springer Netherlands, pp. 365-380 [Online].",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Exploring Young People's Perceptions of the Effectiveness of Text-Based Online Counseling: Mixed Methods Pilot Study",
"authors": [
{
"first": "P",
"middle": [],
"last": "Navarro",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bambling",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sheffield",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Edirippulige",
"suffix": ""
}
],
"year": 2019,
"venue": "JMIR Mental Health",
"volume": "6",
"issue": "7",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Navarro, P., Bambling, M., Sheffield, J. & Edirippulige, S. (2019) Exploring Young People's Perceptions of the Effectiveness of Text-Based Online Counseling: Mixed Methods Pilot Study, JMIR Mental Health, vol. 6, no. 7, e13152 [Online].",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Single session email consultation for parents: an evaluation of its effect on empowerment",
"authors": [
{
"first": "C",
"middle": [
"C"
],
"last": "Nieuwboer",
"suffix": ""
},
{
"first": "R",
"middle": [
"G"
],
"last": "Fukkink",
"suffix": ""
},
{
"first": "J",
"middle": [
"M A"
],
"last": "Hermanns",
"suffix": ""
}
],
"year": 2015,
"venue": "British Journal of Guidance & Counselling",
"volume": "43",
"issue": "1",
"pages": "131--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nieuwboer, C. C., Fukkink, R. G. & Hermanns, J. M. A. (2015) Single session email consultation for par- ents: an evaluation of its effect on empowerment, British Journal of Guidance & Counselling, vol. 43, no. 1, pp. 131-143 [Online].",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "What Makes a Good Counselor? Learning to Distinguish between High-quality and Low-quality Counseling Conversations",
"authors": [
{
"first": "V",
"middle": [],
"last": "P\u00e9rez-Rosas",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Resnicow",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "926--935",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P\u00e9rez-Rosas, V., Wu, X., Resnicow, K. and Mihalcea, R. (2019) What Makes a Good Counselor? Learning to Distinguish between High-quality and Low-qual- ity Counseling Conversations, Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics. Florence, Italy. Stroudsburg, PA, USA, Association for Computational Linguis- tics, pp. 926-935 [Online].",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Why Should I Trust You?\": Explaining the Predictions of Any Classifier",
"authors": [
{
"first": "M",
"middle": [
"T"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ribeiro, M. T., Singh, S. and Guestrin, C. (2016) \"Why Should I Trust You?\": Explaining the Predictions of Any Classifier [Online].",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Single session webbased counselling: a thematic analysis of content from the perspective of the client",
"authors": [
{
"first": "S",
"middle": [
"N"
],
"last": "Rodda",
"suffix": ""
},
{
"first": "D",
"middle": [
"I"
],
"last": "Lubman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Cheetham",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Dowling",
"suffix": ""
},
{
"first": "A",
"middle": [
"C"
],
"last": "Jackson",
"suffix": ""
}
],
"year": 2015,
"venue": "British Journal of Guidance & Counselling",
"volume": "43",
"issue": "1",
"pages": "117--130",
"other_ids": {
"DOI": [
"10.1080/03069885.2014.938609"
]
},
"num": null,
"urls": [],
"raw_text": "Rodda, S. N., Lubman, D. I., Cheetham, A., Dowling, N. A. & Jackson, A. C. (2015) Single session web- based counselling: a thematic analysis of content from the perspective of the client, British Journal of Guidance & Counselling, vol. 43, no.1, pp. 117- 130 [Online]. https://doi.org/10.1080/03069885.2014.9 38609.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "V",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanh, V., Debut, L., Chaumond, J. and Wolf, T. (2019) DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter [Online].",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Examining the complexities of measuring effectiveness of online counselling for young people using routine evaluation data",
"authors": [
{
"first": "A",
"middle": [],
"last": "Sefi",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Hanley",
"suffix": ""
}
],
"year": 2012,
"venue": "Pastoral Care in Education",
"volume": "30",
"issue": "1",
"pages": "49--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sefi, A. & Hanley, T. (2012) Examining the complexi- ties of measuring effectiveness of online counsel- ling for young people using routine evaluation data, Pastoral Care in Education, vol. 30, no. 1, pp. 49- 64 [Online].",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Attention Is All You Need",
"authors": [
{
"first": "A",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "A",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L. and Polosukhin, I. (2017) Attention Is All You Need [Online].",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "HuggingFace's Transformers: State-of-the-art Natural Language Processing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolf, T., e.a. (2019) HuggingFace's Transformers: State-of-the-art Natural Language Processing [Online]. http://arxiv.org/pdf/1910.03771v5.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Illustration of the codebook with an exemplary breakdown of the categories Example of three labeled sequences. The original texts are in German.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Text heatmaps highlighting the determining words for the classification decision",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF3": {
"text": "Evaluation metrics on the test data set trained uncased language model of the Bavarian State Library (DBMDZ, 2019). The three multilingual models achieved weighted F1 scores as high as 71.0% with the pre-trained language model by DeepPavlov (DeepPavlov, n. d.).All of the following analyses are, therefore, based on the best performing DBMDZ BERT model.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF4": {
"text": "Since predictions of BERT, or Transformer models in general, are often untransparent and difficult to",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>Category</td><td colspan=\"2\">F1 score SVM BERT</td><td>Support (Training)</td></tr><tr><td>Other introduction</td><td>27.3%</td><td>11.8%</td><td>37</td></tr><tr><td>Activation of</td><td/><td/><td/></tr><tr><td>resources</td><td>43.2%</td><td>42.1%</td><td>49</td></tr><tr><td>(professional level)</td><td/><td/><td/></tr><tr><td>Wish</td><td>63.8%</td><td>75.9%</td><td>80</td></tr><tr><td>Empathy for others</td><td>59.8%</td><td>49.5%</td><td>118</td></tr><tr><td>Evaluation /</td><td/><td/><td/></tr><tr><td>understanding /</td><td>59.0%</td><td>67.0%</td><td>1136</td></tr><tr><td>calming</td><td/><td/><td/></tr><tr><td>Experience /</td><td/><td/><td/></tr><tr><td>explanation /</td><td>76.2%</td><td>83.1%</td><td>1398</td></tr><tr><td>example</td><td/><td/><td/></tr></table>"
},
"TABREF5": {
"text": "Extract of the classification report",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>Classification Model</td><td>Weighted F1 score</td></tr><tr><td>SVM (baseline)</td><td>68.0%</td></tr><tr><td>BERT (best model)</td><td>74.4%</td></tr><tr><td>DistilBERT</td><td>70.4%</td></tr><tr><td>XLM-RoBERTa</td><td>70.5%</td></tr><tr><td>XLM</td><td>65.1%</td></tr><tr><td>ULMFit</td><td>71.2%</td></tr></table>"
},
"TABREF6": {
"text": "Weighted F1 scores of all evaluated classification models",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF8": {
"text": "Expert assessment of incorrectly classified text sequences",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
}
}
}
}