ACL-OCL / Base_JSON /prefixW /json /wanlp /2021.wanlp-1.17.json
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:59:11.864929Z"
},
"title": "Empathetic BERT2BERT Conversational Model: Learning Arabic Language Generation with Little Data",
"authors": [
{
"first": "Tarek",
"middle": [],
"last": "Naous",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "American University of Beirut Beirut",
"location": {
"addrLine": "Lebanon {tnn11, wfa07,ram79"
}
},
"email": ""
},
{
"first": "Wissam",
"middle": [],
"last": "Antoun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "American University of Beirut Beirut",
"location": {
"addrLine": "Lebanon {tnn11, wfa07,ram79"
}
},
"email": ""
},
{
"first": "Reem",
"middle": [
"A"
],
"last": "Mahmoud",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "American University of Beirut Beirut",
"location": {
"addrLine": "Lebanon {tnn11, wfa07,ram79"
}
},
"email": ""
},
{
"first": "Hazem",
"middle": [],
"last": "Hajj",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "American University of Beirut Beirut",
"location": {
"addrLine": "Lebanon {tnn11, wfa07,ram79"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Enabling empathetic behavior in Arabic dialogue agents is an important aspect of building human-like conversational models. While Arabic Natural Language Processing has seen significant advances in Natural Language Understanding (NLU) with language models such as AraBERT, Natural Language Generation (NLG) remains a challenge. The shortcomings of NLG encoder-decoder models are primarily due to the lack of Arabic datasets suitable to train NLG models such as conversational agents. To overcome this issue, we propose a transformer-based encoder-decoder initialized with AraBERT parameters. By initializing the weights of the encoder and decoder with AraBERT pre-trained weights, our model was able to leverage knowledge transfer and boost performance in response generation. To enable empathy in our conversational model, we train it using the ArabicEmpatheticDialogues dataset and achieve high performance in empathetic response generation. Specifically, our model achieved a low perplexity value of 17.0 and an increase in 5 BLEU points compared to the previous state-of-the-art model. Also, our proposed model was rated highly by 85 human evaluators, validating its high capability in exhibiting empathy while generating relevant and fluent responses in open-domain settings.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Enabling empathetic behavior in Arabic dialogue agents is an important aspect of building human-like conversational models. While Arabic Natural Language Processing has seen significant advances in Natural Language Understanding (NLU) with language models such as AraBERT, Natural Language Generation (NLG) remains a challenge. The shortcomings of NLG encoder-decoder models are primarily due to the lack of Arabic datasets suitable to train NLG models such as conversational agents. To overcome this issue, we propose a transformer-based encoder-decoder initialized with AraBERT parameters. By initializing the weights of the encoder and decoder with AraBERT pre-trained weights, our model was able to leverage knowledge transfer and boost performance in response generation. To enable empathy in our conversational model, we train it using the ArabicEmpatheticDialogues dataset and achieve high performance in empathetic response generation. Specifically, our model achieved a low perplexity value of 17.0 and an increase in 5 BLEU points compared to the previous state-of-the-art model. Also, our proposed model was rated highly by 85 human evaluators, validating its high capability in exhibiting empathy while generating relevant and fluent responses in open-domain settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Conversational models with empathetic responding capabilities are crucial in making human-machine interactions closer to human-human interactions, as they can lead to increased engagement, more trust, and reduced frustration (Yal\u00e7\u0131n and DiPaola, 2018) . These characteristics are highly desirable in open-domain conversational models as they can boost user satisfaction and make chatbots look less boorish. While empathy can be attributed to a range of behaviors, it can be generally described as the innate human capacity of relating to another person's feelings and making sense of their emotional state (Yal\u00e7\u0131n, 2020) . An important factor towards developing human-like dialogue agents is enabling their empathetic capability (Huang et al., 2020) . To this end, there has been a significant interest in developing empathetic conversational models (Majumder et al., 2020; Sharma et al., 2020; Ma et al., 2020; Yal\u00e7\u0131n and DiPaola, 2019) . These models infer the emotions of a human user and provide a suitable empathetic response. The desired behavior of an empathetic conversational agent is illustrated in Figure 1 , where the empathetic agent recognizes that the user is feeling proud and, thus, generates an empathetic response that congratulates the user with enthusiasm.",
"cite_spans": [
{
"start": 225,
"end": 251,
"text": "(Yal\u00e7\u0131n and DiPaola, 2018)",
"ref_id": "BIBREF26"
},
{
"start": 606,
"end": 620,
"text": "(Yal\u00e7\u0131n, 2020)",
"ref_id": "BIBREF25"
},
{
"start": 729,
"end": 749,
"text": "(Huang et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 850,
"end": 873,
"text": "(Majumder et al., 2020;",
"ref_id": null
},
{
"start": 874,
"end": 894,
"text": "Sharma et al., 2020;",
"ref_id": "BIBREF20"
},
{
"start": 895,
"end": 911,
"text": "Ma et al., 2020;",
"ref_id": "BIBREF13"
},
{
"start": 912,
"end": 937,
"text": "Yal\u00e7\u0131n and DiPaola, 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 1109,
"end": 1117,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent work in open-domain empathetic conversational models have adopted neural-based sequence generation approaches (Rashkin et al., 2019) . These approaches are based on encoderdecoder neural network architectures such as Sequence-to-Sequence (Seq2Seq) recurrent neural network models or transformers (Lin et al., 2020) . Despite the significant work done in this direction, the focus so far has been mostly on the English language with fewer efforts being directed towards low-resource languages, such as Arabic. The first dataset for Arabic utterances and empathetic responses was recently introduced by Naous et al. (2020) , where a Bidirectional Long Short-Term Memory (Bi-LSTM) Seq2Seq model was trained on the dataset. However, the model proposed by Naous et al. (2020) delivered suboptimal performance due to the limited size of the dataset. The additional challenges in developing neural-based empathetic conversational models for Arabic is the lack of open-domain conversational data that can be used for pre-training (Li et al., 2017) , and thus no availability of pre-trained conversational models that can be used directly for fine-tuning (Zhang et al., 2020b) .",
"cite_spans": [
{
"start": 117,
"end": 139,
"text": "(Rashkin et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 303,
"end": 321,
"text": "(Lin et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 608,
"end": 627,
"text": "Naous et al. (2020)",
"ref_id": "BIBREF15"
},
{
"start": 758,
"end": 777,
"text": "Naous et al. (2020)",
"ref_id": "BIBREF15"
},
{
"start": 1029,
"end": 1046,
"text": "(Li et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 1153,
"end": 1174,
"text": "(Zhang et al., 2020b)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address the challenges of small dataset size and lack of conversational resources, in terms of datasets and pre-trained models, we propose a transformer-based encoder-decoder model initialized with AraBERT (Antoun et al., 2020) pretrained weights. Our work extends the English BERT2BERT architecture (Rothe et al., 2020) to Arabic response generation. We fine-tune our proposed model on the limited-sized dataset of empathetic responses in Arabic (Naous et al., 2020) . By using the pre-trained weights of the AraBERT language model to initialize the encoder and decoder, our proposed BERT2BERT model is expected to leverage knowledge transfer and show enhanced performance in empathetic response generation compared to the baseline Bi-LSTM model proposed by Naous et al. (2020) .",
"cite_spans": [
{
"start": 209,
"end": 230,
"text": "(Antoun et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 303,
"end": 323,
"text": "(Rothe et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 450,
"end": 470,
"text": "(Naous et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 762,
"end": 781,
"text": "Naous et al. (2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows: Section 2 reviews the recent literature on empathetic conversational models in both English and Arabic. Our proposed BERT2BERT approach for empathetic response generation is presented in Section 3, including the dataset and pre-processing steps. Section 4 analyzes the performance of our model and compares its results to several benchmark models. Concluding remarks and future directions are presented in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The interest in enabling empathy in conversational agents has increased over the last few years with the introduction of the EmpatheticDialogues dataset by Rashkin et al. (2019) . EmpatheticDialogues is a crowdsourced dataset of open-domain conversations where a group of workers was instructed to select an emotion, describe a situation where they have felt that way, and carry out a conversation related to the emotion. The authors used the conversations collected to train retrieval-based and generative-based models, which showed higher levels of empathy in their responses compared with models trained on spontaneous conversational data gathered from the Internet. The release of EmpatheticDialogues (Rashkin et al., 2019) stimulated further research in this area with multiple attempts in the literature to improve the empathetic capability of conversational models. formulated the empathetic responding task as a reinforcement learning problem. The approach named \"Sentiment look-ahead\" employs a Seq2Seq policy model with Gated Recurrent Units to generate an empathetic response based on an input utterance and updates the policy using the REINFORCE method. Lin et al. (2020) fined-tuned a GPT model on the EmpatheticDialogues dataset. The GPT model was pre-trained on the BooksCorpus (Zhu et al., 2015) dataset, improving the NLU capability of the model, as well as on the PersonaChat (Zhang et al., 2018) dataset, allowing the model to have improved performance on response generation.",
"cite_spans": [
{
"start": 156,
"end": 177,
"text": "Rashkin et al. (2019)",
"ref_id": "BIBREF18"
},
{
"start": 1166,
"end": 1183,
"text": "Lin et al. (2020)",
"ref_id": "BIBREF12"
},
{
"start": 1293,
"end": 1311,
"text": "(Zhu et al., 2015)",
"ref_id": "BIBREF31"
},
{
"start": 1394,
"end": 1414,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "English Empathetic Conversational Models",
"sec_num": "2.1"
},
{
"text": "While many works have focused on enabling empathetic capabilities in conversational models for English, there are much fewer attempts to build similar models for Arabic. In general, research on Arabic conversational models is still in its infancy mainly due to the complexity of the language, and the lack of resources and pre-trained models that are available in abundance for English. Despite the availability of Arabic pre-trained language models such as hULMonA (ElJundi et al., 2019) and AraBERT (Antoun et al., 2020) , which have proven useful for Arabic NLU tasks, the lack of pre-trained models for Arabic NLG makes the development of neuralbased Arabic conversational models a challenging task. Hence, existing works on Arabic chatbots have mainly focused on retrieval-based methods (Ali and Habash, 2016) or rule-based approaches (Hijjawi et al., 2014; Fadhil and AbuRa'ed, 2019) . While these approaches work well on task-oriented objectives, they are limited by the size of manually crafted rules they follow or the richness of the database they can retrieve responses from. This ",
"cite_spans": [
{
"start": 501,
"end": 522,
"text": "(Antoun et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 840,
"end": 862,
"text": "(Hijjawi et al., 2014;",
"ref_id": "BIBREF6"
},
{
"start": 863,
"end": 889,
"text": "Fadhil and AbuRa'ed, 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Empathetic Conversational Models",
"sec_num": "2.2"
},
{
"text": "AraBERT-initialized Encoder AraBERT-initialized Decoder Recently, the first empathy-driven Arabic conversational model was proposed by Naous et al. (2020) that released ArabicEmpatheticDialogues, a dataset of Arabic utterances and their corresponding empathetic responses. The authors trained a Seq2Seq model with bidirectional LSTM units on the dataset. While the model succeeded in generating empathetic responses, it showed an average Relevance score which indicates that the responses can sometimes go off-topic and may not be suitable responses for the emotional context of the input utterance. The limitations of this work were mainly due to the limited size of the dataset.",
"cite_spans": [
{
"start": 135,
"end": 154,
"text": "Naous et al. (2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Empathetic Conversational Models",
"sec_num": "2.2"
},
{
"text": "In this work, we adopt the BERT2BERT architecture (Rothe et al., 2020) and leverage the pretrained AraBERT (Antoun et al., 2020) model to improve the performance of empathetic Arabic conversational models.",
"cite_spans": [
{
"start": 50,
"end": 70,
"text": "(Rothe et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 107,
"end": 128,
"text": "(Antoun et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Empathetic Conversational Models",
"sec_num": "2.2"
},
{
"text": "3 Proposed Method",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Empathetic Conversational Models",
"sec_num": "2.2"
},
{
"text": "Our proposed model for Arabic empathetic response generation is a transformer-based Seq2Seq model (Vaswani et al., 2017) , which has been shown to boost performance on a several Seq2Seq tasks (Raffel et al., 2020; Lewis et al., 2019) . However, such an architecture would require massive pre-training before being fine-tuned on the desired task (Zhang et al., 2020a) . It was shown by Rothe et al. (2020) that warm-starting the transformerbased encoder-decoder model with the checkpoints of a pre-trained encoder (e.g. BERT) allows the model to deliver competitive results in sequence generation tasks while skipping the costly pretraining. Inspired by this idea, and due to the unavailability of Arabic conversational datasets that can be used for pre-training, we adopt the BERT2BERT architecture (Rothe et al., 2020) , and warm-start the encoder and decoder with the AraBERT checkpoint (Antoun et al., 2020) . The encoder-decoder attention is randomly initialized. The architecture of the proposed model is illustrated in Figure 2 .",
"cite_spans": [
{
"start": 98,
"end": 120,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 192,
"end": 213,
"text": "(Raffel et al., 2020;",
"ref_id": "BIBREF17"
},
{
"start": 214,
"end": 233,
"text": "Lewis et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 345,
"end": 366,
"text": "(Zhang et al., 2020a)",
"ref_id": "BIBREF28"
},
{
"start": 385,
"end": 404,
"text": "Rothe et al. (2020)",
"ref_id": "BIBREF19"
},
{
"start": 799,
"end": 819,
"text": "(Rothe et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 889,
"end": 910,
"text": "(Antoun et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 1025,
"end": 1033,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Proposed BERT2BERT Model",
"sec_num": "3.1"
},
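The warm-starting idea described above can be sketched without any deep-learning library: both halves of the Seq2Seq model reuse the pre-trained encoder weights, and only the cross-attention is drawn fresh. This is a minimal illustration, not the paper's implementation (which uses the Huggingface transformers library); the parameter names and sizes below are hypothetical.

```python
import random

def warm_start_bert2bert(checkpoint):
    """Toy BERT2BERT warm-start: the encoder and the decoder both copy the
    pre-trained encoder weights, while the encoder-decoder (cross-)
    attention, which has no pre-trained counterpart, is randomly
    initialized."""
    rng = random.Random(0)
    return {
        "encoder": {name: list(w) for name, w in checkpoint.items()},
        "decoder": {name: list(w) for name, w in checkpoint.items()},
        "cross_attention": {
            name: [rng.gauss(0.0, 0.02) for _ in w]
            for name, w in checkpoint.items()
        },
    }

# Stand-in "AraBERT checkpoint": two named weight vectors (hypothetical).
checkpoint = {"attn.w": [0.1, 0.2, 0.3], "ffn.w": [0.4, 0.5, 0.6]}
model = warm_start_bert2bert(checkpoint)
```

In the actual transformers library this pattern is available directly via `EncoderDecoderModel.from_encoder_decoder_pretrained`, which likewise copies the two BERT checkpoints and leaves the cross-attention randomly initialized.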
{
"text": "The input to the proposed model is a sequence x = [x 1 , x 2 , . . . , x nx ] of one-hot representations with a length of n x tokens, chosen to be 150. This sequence is fed as input to an AraBERT initialized encoder. At the decoder side, the model generates an empathetic response represented by a sequence y = [y 1 , y 2 , . . . , y ny ], where the maximum output length n y is also specified to be 150. We optimize the log-likelihood loss over the output tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed BERT2BERT Model",
"sec_num": "3.1"
},
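The fixed sequence length and the training objective above amount to two small operations, sketched here with stdlib Python only (the token ids and per-token probabilities are made-up values for illustration):

```python
import math

def pad_or_truncate(ids, max_len=150, pad_id=0):
    """Clip or right-pad a token-id sequence to the fixed length of 150."""
    return (ids + [pad_id] * max_len)[:max_len]

def sequence_nll(token_probs):
    """Log-likelihood loss over the output tokens: the mean negative log
    probability the model assigns to each reference token."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

seq = pad_or_truncate([101, 7592, 102], max_len=5)  # -> [101, 7592, 102, 0, 0]
# Hypothetical per-token probabilities for a 4-token reference response:
loss = sequence_nll([0.5, 0.25, 0.8, 0.1])
```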
{
"text": "To generate empathetic responses from our model, we adopt the Top-K Sampling scheme (Fan et al., 2018) where, at each time step, the model randomly samples the K most likely candidates from the probability distribution of all words in the vocabulary. This decoding strategy has been",
"cite_spans": [
{
"start": 84,
"end": 102,
"text": "(Fan et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed BERT2BERT Model",
"sec_num": "3.1"
},
{
"text": "Embarrassed Utterance Response Table 1 : Samples of utterances and empathetic responses from the ArabicEmpatheticDialogues dataset for three emotion labels: Excited, Furious, and Embarrassed found more effective than conventional approaches such as beam search, which tends to yield common responses found repetitively in the training set or similar, slightly-varying versions of the same high-likelihood sequences (Ippolito et al., 2019) . We use the ArabicEmpatheticDialogues dataset (Naous et al., 2020) which was translated from the English version introduced by Rashkin et al. (2019). ArabicEmpathicDialogues contains 36,628 samples of speaker utterances and their corresponding empathetic responses in Arabic. Each sample is also labeled with the emotion of the speaker's utterance. Three examples from the dataset for three different emotion labels are provided in Table 1 . By training a sequence generation model on the samples of utterances and their corresponding responses from the dataset, the model will be able to infer the emotions in input utterances and provide suitable empathetic responses. Thus, the empathetic capability of the model would be enhanced.",
"cite_spans": [
{
"start": 415,
"end": 438,
"text": "(Ippolito et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 486,
"end": 506,
"text": "(Naous et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 31,
"end": 38,
"text": "Table 1",
"ref_id": null
},
{
"start": 872,
"end": 879,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Emotion",
"sec_num": null
},
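The Top-K decoding step described above can be sketched in a few lines: keep only the K most probable token ids, renormalize, and sample from the truncated distribution. The toy 5-word vocabulary below is illustrative.

```python
import random

def top_k_sample(probs, k, rng=random):
    """Top-K sampling: keep the k highest-probability token ids,
    renormalize their probabilities, and draw the next token from that
    truncated distribution instead of the full vocabulary."""
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in top)
    return rng.choices(top, weights=[probs[i] / total for i in top], k=1)[0]

# Toy next-token distribution over a 5-word vocabulary.
probs = [0.02, 0.50, 0.30, 0.08, 0.10]
token = top_k_sample(probs, k=2, rng=random.Random(0))  # always id 1 or 2
```

With k = 1 this degenerates to greedy decoding; larger k trades determinism for the response diversity that motivates the scheme.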
{
"text": "The dataset is originally labeled with 32 emotion labels, many of which are very similar such as \"joyful\" and \"content\", or \"angry\" and \"furious\". To reduce the number of classes, we follow the tree-structured list of emotions defined by Parrott (2001) to map the 32 emotion labels to their 6 primary emotions which are \"Joy\", \"Surprise\", \"Love\", \"Surprise\", \"Anger\", and \"Fear\". This grouping is shown in Table 2 .",
"cite_spans": [
{
"start": 238,
"end": 252,
"text": "Parrott (2001)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 406,
"end": 413,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Post-Segmentation",
"sec_num": null
},
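The label grouping is a simple lookup. The pairs below cover only a subset of the 32 fine-grained labels, and the exact assignments are representative assumptions (the paper's full grouping is in its Table 2, which is not reproduced here):

```python
# Illustrative mapping from a subset of the 32 fine-grained labels to
# Parrott's six primary emotions; the exact grouping used in the paper is
# given in its Table 2, so the pairs below are assumptions.
PRIMARY_EMOTION = {
    "joyful": "Joy", "content": "Joy", "proud": "Joy", "excited": "Joy",
    "angry": "Anger", "furious": "Anger", "annoyed": "Anger",
    "afraid": "Fear", "terrified": "Fear", "anxious": "Fear",
    "sad": "Sadness", "devastated": "Sadness", "lonely": "Sadness",
    "surprised": "Surprise", "impressed": "Surprise",
    "caring": "Love", "sentimental": "Love",
}

def to_primary(label):
    """Map a fine-grained emotion label to its primary class."""
    return PRIMARY_EMOTION[label]
```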
{
"text": "To reduce lexical sparsity, the utterances and responses in the dataset are segmented using the Farasa segmenter (Abdelali et al., 2016) . Given the morphological complexity of the Arabic language, segmentation is an important pre-processing step that can greatly enhance the performance of neuralbased sequence generation models. An example of this process is shown in Table 3 . By performing segmentation, the vocabulary size is drastically reduced from 47K tokens to around 13K tokens. ",
"cite_spans": [
{
"start": 113,
"end": 136,
"text": "(Abdelali et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 370,
"end": 377,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Post-Segmentation",
"sec_num": null
},
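The vocabulary-reduction effect of segmentation can be seen with a toy stand-in for Farasa, shown transliterated for readability: splitting a couple of common Arabic clitics (the conjunction "w+" and the definite article "al+") off the stem lets different surface forms share one stem type. The affix inventory and corpus here are small illustrative assumptions, not Farasa's actual behavior.

```python
# Toy clitic segmentation (transliterated): "walkitab" -> "wal+" "kitab".
PREFIXES = ("wal", "al", "w")

def segment(token):
    """Split a known clitic prefix off the stem, Farasa-style."""
    for p in PREFIXES:
        if token.startswith(p) and len(token) > len(p) + 1:
            return [p + "+", token[len(p):]]
    return [token]

def vocab_size(tokens, segmented=False):
    """Count distinct vocabulary units with or without segmentation."""
    units = [u for t in tokens for u in (segment(t) if segmented else [t])]
    return len(set(units))

corpus = ["walkitab", "alkitab", "kitab", "walbayt", "albayt", "bayt"]
```

On this six-type corpus the segmented vocabulary shrinks to four units, mirroring (in miniature) the 47K-to-13K reduction reported above.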
{
"text": "We evaluate the proposed BERT2BERT model in comparison to three benchmark models. We conduct numerical as well as human evaluation of the different conversational models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "We train several neural-based sequence generation models on the ArabicEmpatheticDialogues dataset and consider them as benchmarks for performance comparison. The benchmark models are denoted as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark Models",
"sec_num": "4.1"
},
{
"text": "Baseline: The baseline model, illustrated in Figure 3 , is a Seq2Seq Bi-LSTM model with Attention following the prior state-of-the-art model proposed by Naous et al. (2020) .",
"cite_spans": [
{
"start": 153,
"end": 172,
"text": "Naous et al. (2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 45,
"end": 53,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Benchmark Models",
"sec_num": "4.1"
},
{
"text": "EmoPrepend: In this setup, illustrated in Figure 3, we prepend the emotion label to each utterance before feeding it as input to the baseline model described above, and we denote this approach as EmoPrepend. This allows us to add supervised information to the data, without having to introduce any modifications to the architecture. The existing emotion labels have been prepended to the utterances in the train and validation sets. For the test set and at inference, we fine-tune AraBERT",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 48,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Benchmark Models",
"sec_num": "4.1"
},
{
"text": "PPL BLEU Baseline (Naous et al., 2020) for emotion classification using the utterances and their labels in the dataset. The fine-tuned AraBERT model is then used as an external predictor to classify the emotion in the utterance and prepend it as a token before being used as an input to the Emo-Prepend model. We note that the step of grouping emotion labels into 6 main labels, as discussed in Section 3, makes the emotion classification task easier. BERT2BERT-UN: which stands for BERT2BERT-Uninitialized.",
"cite_spans": [
{
"start": 18,
"end": 38,
"text": "(Naous et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
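The EmoPrepend input construction is a one-line transformation. The exact marker format is not specified in the text, so the angle-bracket token below is an assumption:

```python
def emo_prepend(utterance, emotion):
    """EmoPrepend input construction: the (gold or predicted) emotion label
    is added as a leading token, giving the Seq2Seq model extra supervision
    without any change to the architecture. The "<...>" marker format is an
    illustrative assumption."""
    return f"<{emotion}> {utterance}"

example = emo_prepend("I finally got the job!", "Joy")
```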
{
"text": "This model is a regular transformer-based encoder-decoder model that shares the same architecture of the BERT2BERT model shown in Figure 2 , but is not initialized with AraBERT pre-trained weights.",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 138,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "The proposed BERT2BERT model was developed using the Huggingface transformers library 1 . We train the model for 5 epochs with a batch size of 32 2 . Model training was done on a 16GB V100 NVidia GPU. The Baseline Bi-LSTM Seq2Seq (Naous et al., 2020) , EmoPrepend, and BERT2BERT-UN benchmark models were developed using the Open-NMT Library (Klein et al., 2017) .",
"cite_spans": [
{
"start": 230,
"end": 250,
"text": "(Naous et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 341,
"end": 361,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.2"
},
{
"text": "Dataset Partitioning: All models were trained and evaluated on common data splits of the Ara-bicEmpatheticDialogues. We randomly partitioned the dataset into 90% training, 5% validation, and 5% testing using a seed of 42. Table 4 summarizes the perplexity (PPL) and Bilingual Evaluation Understudy (BLEU) scores for the proposed and benchmark models when evaluated on the test set. It is clear from the numerical evaluation results that the proposed BERT2BERT model consistently outperforms the benchmark models. This is reflected through both a lower PPL score and a higher BLEU score.",
"cite_spans": [],
"ref_spans": [
{
"start": 222,
"end": 229,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.2"
},
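The 90/5/5 seeded partition can be sketched as follows. Only the ratios and the seed value of 42 come from the text; the exact shuffling procedure is an assumption:

```python
import random

def split_dataset(samples, train_frac=0.90, val_frac=0.05, seed=42):
    """Random 90/5/5 partition with a fixed seed so that all models are
    trained and evaluated on the same splits."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    n_train = int(train_frac * len(samples))
    n_val = int(val_frac * len(samples))
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test

# 36,628 samples, as in ArabicEmpatheticDialogues.
train_set, val_set, test_set = split_dataset(list(range(36628)))
```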
{
"text": "Empathy Relevance Fluency Baseline (Naous et al., 2020) 2 Table 6 : Examples of responses generated by the BERT2BERT model for multiple utterances with various emotional states and domain contexts.",
"cite_spans": [
{
"start": 35,
"end": 55,
"text": "(Naous et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "With EmoPrepend, the addition of supervised information in the form of prepended emotion labels showed performance improvements in comparison to the Baseline model, reflected by an increase in 2.6 BLEU points and a reduction of 14.5 points in the PPL score. Nevertheless, the PPL score of EmoPrepend at 24.1 is still considered high and could potentially lead to sub-optimal performance. BERT2BERT showed significant performance improvements in comparison to the baseline Seq2Seq Bi-LSTM, highlighted by a much reduced PPL value of 17.0 and an increase in 5 BLEU points. BERT2BERT also achieved better scores than the EmoPrepend model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
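The PPL comparisons above follow directly from the definition of perplexity as the exponential of the mean per-token negative log-likelihood. A natural-log base is assumed below (the paper does not state the base), so the loss values are back-derived for illustration:

```python
import math

def perplexity(mean_nll):
    """Perplexity is the exponential of the mean per-token negative
    log-likelihood, so a lower cross-entropy loss directly means a lower
    (better) PPL."""
    return math.exp(mean_nll)

# Cross-entropy levels corresponding to the reported PPL values
# (natural-log base assumed):
loss_bert2bert = math.log(17.0)   # BERT2BERT's reported PPL of 17.0
loss_emoprepend = math.log(24.1)  # EmoPrepend's reported PPL of 24.1
```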
{
"text": "The BERT2BERT-UN model resulted in a very high PPL score of 158.9 and very low BLEU score of 0.1. These poor results are due to the nature of transformer networks that require huge amounts of data samples to deliver good performance. The initialization of the BERT2BERT with pre-trained AraBERT weights showed very significant enhancements compared with the uninitialized BERT2BERT-UN model. This performance boost provided by the BERT2BERT model is expected given the fact that AraBERT's initialization parameters have been pre-trained on a massive 24 GB Arabic corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "The numerical results achieved by the BERT2BERT model are particularly impressive since, despite the limited size of the ArabicEmpa-theticDialogues dataset, BERT2BERT was able to leverage knowledge transfer through fine-tuning to achieve state-of-art performance on the task of open-domain empathetic response generation in Arabic without requiring additional empathetic samples to train on, or pre-training conversational data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Automated metrics such as PPL and BLEU scores are not sufficient alone to evaluate a model's ability to exhibit empathetic behavior. Given the unavailability of specific metrics to evaluate empathy in a conversational model, we resort to evaluation based on the judgment of human subjects. Through human evaluation, we can evaluate the emotional communication capability of the models, which is their ability to recognize emotion in the input utterance and generate a suitable expression of emotion in their corresponding response (Yal\u00e7\u0131n, 2019) . To this end, we conducted a survey to collect ratings from 85 native Arabic speakers. Table 7 : Examples of responses generated by the BERT2BERT model for multiple utterances with neutral emotions.",
"cite_spans": [
{
"start": 531,
"end": 545,
"text": "(Yal\u00e7\u0131n, 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 634,
"end": 641,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "4.4"
},
{
"text": "The raters were shown various utterances and their corresponding responses generated by the Baseline, EmoPrepend, and BERT2BERT models. The BERT2BERT-UN model was excluded from the survey given its poor results in terms of numerical metrics. The raters were asked to evaluate each of the models' ability to show Empathy, Relevance, and Fluency in the generated response. The raters were asked to answer the following questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generated Response Utterance",
"sec_num": null
},
{
"text": "\u2022 Empathy: Does the generated response show an ability to infer the emotions in the given utterance?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generated Response Utterance",
"sec_num": null
},
{
"text": "\u2022 Relevance: How relevant is the generated response to the input utterance?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generated Response Utterance",
"sec_num": null
},
{
"text": "\u2022 Fluency: How understandable is the generated response? Is it linguistically correct?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generated Response Utterance",
"sec_num": null
},
{
"text": "For each question, the raters were asked to score the responses of the models on a scale of 0 to 5, where 0 reflects extremely poor performance and 5 reflects excellent performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generated Response Utterance",
"sec_num": null
},
{
"text": "The results of the survey are summarized in Table 5, where we report the average of the collected ratings. The EmoPrepend model showed a higher average score of Empathy and Relevance than the Baseline model. However, these scores are below 3, meaning the EmoPrepend model was seen to deliver below-average performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generated Response Utterance",
"sec_num": null
},
{
"text": "On the other hand, the average ratings of the BERT2BERT model can be considered high and are much superior to both the Baseline and the Emo-Prepend models, which indicates BERT2BERT's ability to deliver highly empathetic responses while abiding by linguistic correctness. This is reflected in some examples of the generated responses by BERT2BERT that can be seen in Table 6 . The responses demonstrate the model's ability to express empathetic, relevant, and fluent responses when prompted with input utterances with various emotional states and domain contexts, which also proves its ability to handle open-domain conversations.",
"cite_spans": [],
"ref_spans": [
{
"start": 367,
"end": 374,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generated Response Utterance",
"sec_num": null
},
{
"text": "Despite the promising results achieved by the BERT2BERT model in generating relevant empathetic responses in open-domain settings, it was shown to poorly handle regular chit-chat utterances with neutral emotions, such as \"Hey, how are you?\" or \"What are you doing?\". Instead of providing a regular response, the BERT2BERT model will opt to generate an empathetic response as we show in Table 7 . This issue can be explained by the fact that the model was fine-tuned on a dataset comprised of utterances with pure emotional context and corresponding empathetic responses. Moreso, the AraBERT-initialized parameters did not help mitigate this issue since AraBERT is pre-trained in a self-supervised fashion on news articles and later fine-tuned on a task-specific dataset that does not contain regular chit-chat samples. Thus, it is clear why the BERT2BERT model is not able to handle neutral chit-chat conversations, as it is outside the scope of the training data and the task at hand.",
"cite_spans": [],
"ref_spans": [
{
"start": 386,
"end": 393,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance on Inputs with Neutral Emotional States",
"sec_num": "4.5"
},
{
"text": "In this paper, we address the limitation in resources for Arabic conversational systems, in particular, empathetic conversations. Unlike the English language which has seen great advancements in language generation models due to large corpora and million parameter pre-trained models like GPT, Arabic is considered a low-resource language with limited availability of conversational datasets and pre-trained models for response generation. We propose an empathetic BERT2BERT, a transformer-based model, of which the encoder and decoder are warm-started using AraBERT pre-trained parameters and fine-tuned for Arabic empathetic response generation using the ArabicEm-patheticDialogues dataset. By adopting this transfer learning strategy, the proposed BERT2BERT model was able to address the challenges of building an open-domain neural-based empathetic conversational model for a low resource language such as Arabic. BERT2BERT achieved significant performance improvements in comparison to three benchmark models, a baseline Seq2Seq Bi-LSTM model, a Seq2Seq Bi-LSTM model with prepended supervised information about the emotion label during the training process, and a transformer-based encoder-decoder that is not initialized with pretrained weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The proposed BERT2BERT model achieved a low PPL value of 17.0, a BLEU score of 5.58, and was rated highly by human evaluators with a score of 4.3/5.0, reflecting its ability to generate empathetic, relevant, and fluent responses. Hence, our results show the ability to develop high-performing conversational models in low resource settings by adopting the BERT2BERT strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Despite its high performance in empathetic response generation, BERT2BERT showed a limitation in its ability to handle regular chit-chat conversations with neutral emotional states. To this end, future directions include the development of a strategy that improves the model's ability to determine when an empathetic response is suitable and when it is not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://github.com/huggingface/transformers 2 https://github.com/aub-mind/Arabic-Empathetic-Chatbot",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been funded by the University Research Board (URB) at the American University of Beirut (AUB).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Farasa: A fast and furious segmenter for arabic",
"authors": [
{
"first": "Ahmed",
"middle": [],
"last": "Abdelali",
"suffix": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: Demonstrations",
"volume": "",
"issue": "",
"pages": "11--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmed Abdelali, Kareem Darwish, Nadir Durrani, and Hamdy Mubarak. 2016. Farasa: A fast and furious segmenter for arabic. In Proceedings of the 2016 conference of the North American chapter of the as- sociation for computational linguistics: Demonstra- tions, pages 11-16.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Botta: An arabic dialect chatbot",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Dana Abu",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "208--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dana Abu Ali and Nizar Habash. 2016. Botta: An arabic dialect chatbot. In Proceedings of COLING 2016, the 26th International Conference on Compu- tational Linguistics: System Demonstrations, pages 208-212.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "AraBERT: Transformer-based model for arabic language understanding",
"authors": [
{
"first": "Wissam",
"middle": [],
"last": "Antoun",
"suffix": ""
},
{
"first": "Fady",
"middle": [],
"last": "Baly",
"suffix": ""
},
{
"first": "Hazem",
"middle": [],
"last": "Hajj",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection",
"volume": "",
"issue": "",
"pages": "9--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. AraBERT: Transformer-based model for arabic lan- guage understanding. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Pro- cessing Tools, with a Shared Task on Offensive Lan- guage Detection, pages 9-15.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Wassim El-Hajj, and Khaled Shaban. 2019. hulmona: The universal language model in arabic",
"authors": [
{
"first": "Obeida",
"middle": [],
"last": "Eljundi",
"suffix": ""
},
{
"first": "Wissam",
"middle": [],
"last": "Antoun",
"suffix": ""
},
{
"first": "Nour",
"middle": [
"El"
],
"last": "Droubi",
"suffix": ""
},
{
"first": "Hazem",
"middle": [],
"last": "Hajj",
"suffix": ""
},
{
"first": "Wassim",
"middle": [],
"last": "El-Hajj",
"suffix": ""
},
{
"first": "Khaled",
"middle": [],
"last": "Shaban",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "68--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Obeida ElJundi, Wissam Antoun, Nour El Droubi, Hazem Hajj, Wassim El-Hajj, and Khaled Shaban. 2019. hulmona: The universal language model in arabic. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 68-77.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "OlloBot -towards a text-based Arabic health conversational agent: Evaluation and results",
"authors": [
{
"first": "Ahmed",
"middle": [],
"last": "Fadhil",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Abura",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "295--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmed Fadhil and Ahmed AbuRa'ed. 2019. OlloBot -towards a text-based Arabic health conversational agent: Evaluation and results. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 295-303.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Hierarchical neural story generation",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "889--898",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hi- erarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "ArabChat: an arabic conversational agent",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Hijjawi",
"suffix": ""
},
{
"first": "Zuhair",
"middle": [],
"last": "Bandar",
"suffix": ""
},
{
"first": "Keeley",
"middle": [],
"last": "Crockett",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mclean",
"suffix": ""
}
],
"year": 2014,
"venue": "6th International Conference on Computer Science and Information Technology (CSIT)",
"volume": "",
"issue": "",
"pages": "227--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Hijjawi, Zuhair Bandar, Keeley Crockett, and David Mclean. 2014. ArabChat: an arabic con- versational agent. In 2014 6th International Confer- ence on Computer Science and Information Technol- ogy (CSIT), pages 227-237. IEEE.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Challenges in building intelligent open-domain dialog systems",
"authors": [
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Transactions on Information Systems (TOIS)",
"volume": "38",
"issue": "3",
"pages": "1--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2020. Challenges in building intelligent open-domain dia- log systems. ACM Transactions on Information Sys- tems (TOIS), 38(3):1-32.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Comparison of diverse decoding methods from conditional language models",
"authors": [
{
"first": "Daphne",
"middle": [],
"last": "Ippolito",
"suffix": ""
},
{
"first": "Reno",
"middle": [],
"last": "Kriz",
"suffix": ""
},
{
"first": "Joao",
"middle": [],
"last": "Sedoc",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Kustikova",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3752--3762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daphne Ippolito, Reno Kriz, Joao Sedoc, Maria Kustikova, and Chris Callison-Burch. 2019. Com- parison of diverse decoding methods from condi- tional language models. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 3752-3762.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Opennmt: Opensource toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander M Rush. 2017. Opennmt: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal ; Abdelrahman Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ves",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.13461"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dailydialog: A manually labelled multi-turn dialogue dataset",
"authors": [
{
"first": "Yanran",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Xiaoyu",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Shuzi",
"middle": [],
"last": "Niu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "986--995",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Nat- ural Language Processing (Volume 1: Long Papers), pages 986-995.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "CAiRE: an end-to-end empathetic chatbot",
"authors": [
{
"first": "Zhaojiang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Genta",
"middle": [],
"last": "Indra Winata",
"suffix": ""
},
{
"first": "Farhad",
"middle": [],
"last": "Bin Siddique",
"suffix": ""
},
{
"first": "Zihan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jamin",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "13622--13623",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhaojiang Lin, Peng Xu, Genta Indra Winata, Farhad Bin Siddique, Zihan Liu, Jamin Shin, and Pascale Fung. 2020. CAiRE: an end-to-end empa- thetic chatbot. In AAAI, pages 13622-13623.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A control unit for emotional conversation generation",
"authors": [
{
"first": "Zhiqiang",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Baoxiang",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Access",
"volume": "8",
"issue": "",
"pages": "43168--43176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiqiang Ma, Rui Yang, Baoxiang Du, and Yan Chen. 2020. A control unit for emotional conversation gen- eration. IEEE Access, 8:43168-43176.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Alexander Gelbukh, Rada Mihalcea, and Soujanya Poria. 2020. Mime: Mimicking emotions for empathetic response generation",
"authors": [
{
"first": "Navonil",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Shanshan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Jiankun",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Deepanway",
"middle": [],
"last": "Ghosal",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "8968--8979",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Navonil Majumder, Pengfei Hong, Shanshan Peng, Jiankun Lu, Deepanway Ghosal, Alexander Gel- bukh, Rada Mihalcea, and Soujanya Poria. 2020. Mime: Mimicking emotions for empathetic re- sponse generation. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 8968-8979.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Empathy-driven arabic conversational chatbot",
"authors": [
{
"first": "Tarek",
"middle": [],
"last": "Naous",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hokayem",
"suffix": ""
},
{
"first": "Hazem",
"middle": [],
"last": "Hajj",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "58--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tarek Naous, Christian Hokayem, and Hazem Hajj. 2020. Empathy-driven arabic conversational chat- bot. In Proceedings of the Fifth Arabic Natural Lan- guage Processing Workshop, pages 58-68.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Emotions in social psychology: Essential readings",
"authors": [
{
"first": "Parrott",
"middle": [],
"last": "W Gerrod",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W Gerrod Parrott. 2001. Emotions in social psychol- ogy: Essential readings. Psychology Press.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Machine Learning Research",
"volume": "21",
"issue": "140",
"pages": "1--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the lim- its of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Towards empathetic opendomain conversation models: A new benchmark and dataset",
"authors": [
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"Michael"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Y-Lan",
"middle": [],
"last": "Boureau",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5370--5381",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open- domain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 5370-5381.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Leveraging pre-trained checkpoints for sequence generation tasks",
"authors": [
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "264--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2020. Leveraging pre-trained checkpoints for se- quence generation tasks. Transactions of the Asso- ciation for Computational Linguistics, 8:264-280.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A computational approach to understanding empathy expressed in text-based mental health support",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Miner",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Atkins",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Althoff",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "5263--5276",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Sharma, Adam Miner, David Atkins, and Tim Althoff. 2020. A computational approach to un- derstanding empathy expressed in text-based men- tal health support. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 5263-5276.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Generating empathetic responses by looking ahead the user's sentiment",
"authors": [
{
"first": "Jamin",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Madotto",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2020,
"venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "7989--7993",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jamin Shin, Peng Xu, Andrea Madotto, and Pas- cale Fung. 2020. Generating empathetic re- sponses by looking ahead the user's sentiment. In ICASSP 2020-2020 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 7989-7993. IEEE.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Attention is all you need",
"authors": [
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Evaluating empathy in artificial agents",
"authors": [
{
"first": "",
"middle": [],
"last": "Ozge Nilay Yal\u00e7\u0131n",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII)",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ozge Nilay Yal\u00e7\u0131n. 2019. Evaluating empathy in arti- ficial agents. In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), pages 1-7. IEEE.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Empathy framework for embodied conversational agents. Cognitive Systems Research",
"authors": [
{
"first": "",
"middle": [],
"last": "Ozge Nilay Yal\u00e7\u0131n",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "59",
"issue": "",
"pages": "123--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ozge Nilay Yal\u00e7\u0131n. 2020. Empathy framework for em- bodied conversational agents. Cognitive Systems Re- search, 59:123-132.\u00d6",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A computational model of empathy for interactive agents",
"authors": [
{
"first": "Steve",
"middle": [],
"last": "Zge Nilay Yal\u00e7\u0131n",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dipaola",
"suffix": ""
}
],
"year": 2018,
"venue": "Biologically Inspired Cognitive Architectures",
"volume": "26",
"issue": "",
"pages": "20--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "zge Nilay Yal\u00e7\u0131n and Steve DiPaola. 2018. A compu- tational model of empathy for interactive agents. Bi- ologically Inspired Cognitive Architectures, 26:20- 25.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "M-path: a conversational system for the empathic virtual agent",
"authors": [
{
"first": "Steve",
"middle": [],
"last": "Ozge Nilay Yal\u00e7\u0131n",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dipaola",
"suffix": ""
}
],
"year": 2019,
"venue": "Biologically Inspired Cognitive Architectures Meeting",
"volume": "",
"issue": "",
"pages": "597--607",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ozge Nilay Yal\u00e7\u0131n and Steve DiPaola. 2019. M-path: a conversational system for the empathic virtual agent. In Biologically Inspired Cognitive Architec- tures Meeting, pages 597-607. Springer.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization",
"authors": [
{
"first": "Jingqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "11328--11339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Pe- ter Liu. 2020a. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In In- ternational Conference on Machine Learning, pages 11328-11339. PMLR.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Personalizing dialogue agents: I have a dog, do you have pets too?",
"authors": [
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Urbanek",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2204--2213",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204- 2213.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "DI-ALOGPT: Large-scale generative pre-training for conversational response generation",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Siqi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Yen-Chun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "William B",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "270--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and William B Dolan. 2020b. DI- ALOGPT: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270-278.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books",
"authors": [
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE In- ternational Conference on Computer Vision, pages 19-27.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Example of empathetic behavior in a conversational agent.",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Architecture of the proposed BERT2BERT model initialized with AraBERT checkpoints for Arabic empathetic response generation. makes it difficult for such types of models to operate well in open-domain conversational settings, where generative neural-based models would be more suitable.",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Architectures of the Baseline and Emo-Prepend models used for comparative evaluation against the proposed BERT2BERT model.",
"uris": null,
"num": null
},
"TABREF3": {
"html": null,
"text": "Grouping of emotion labels in the ArabicEm-patheticDialogues dataset as per Parrott's characterization of tree-structured emotions(Parrott, 2001).",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF4": {
"html": null,
"text": "Example of an Arabic utterance segmentation using Farasa.",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"html": null,
"text": "Performance of the models on the test set in terms of PPL and BLEU score.",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF8": {
"html": null,
"text": "Average evaluation of the collected human ratings.",
"num": null,
"type_str": "table",
"content": "<table><tr><td>Generated Response</td><td>Utterance</td><td>Emotion</td></tr><tr><td/><td/><td>Sadness</td></tr><tr><td/><td/><td>Joy</td></tr><tr><td/><td/><td>Fear</td></tr><tr><td/><td/><td>Joy</td></tr><tr><td/><td/><td>Surprise</td></tr><tr><td/><td/><td>Sadness</td></tr><tr><td/><td/><td>Anger</td></tr><tr><td/><td/><td>Sadness</td></tr></table>"
}
}
}
}