diff --git "a/title_31K_G/test_title_long_2405.04403v1.json" "b/title_31K_G/test_title_long_2405.04403v1.json"
new file mode 100644
--- /dev/null
+++ "b/title_31K_G/test_title_long_2405.04403v1.json"
@@ -0,0 +1,393 @@
+{
+ "url": "http://arxiv.org/abs/2405.04403v1",
+ "title": "Learning To See But Forgetting To Follow: Visual Instruction Tuning Makes LLMs More Prone To Jailbreak Attacks",
+ "abstract": "Augmenting Large Language Models (LLMs) with image-understanding capabilities\nhas resulted in a boom of high-performing Vision-Language models (VLMs). While\nstudying the alignment of LLMs to human values has received widespread\nattention, the safety of VLMs has not received the same attention. In this\npaper, we explore the impact of jailbreaking on three state-of-the-art VLMs,\neach using a distinct modeling approach. By comparing each VLM to their\nrespective LLM backbone, we find that each VLM is more susceptible to\njailbreaking. We consider this as an undesirable outcome from visual\ninstruction-tuning, which imposes a forgetting effect on an LLM's safety\nguardrails. Therefore, we provide recommendations for future work based on\nevaluation strategies that aim to highlight the weaknesses of a VLM, as well as\ntake safety measures into account during visual instruction tuning.",
+ "authors": "Georgios Pantazopoulos, Amit Parekh, Malvina Nikandrou, Alessandro Suglia",
+ "published": "2024-05-07",
+ "updated": "2024-05-07",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.CL"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "LLM AND Jailbreak",
+ "gt": "Learning To See But Forgetting To Follow: Visual Instruction Tuning Makes LLMs More Prone To Jailbreak Attacks",
+ "main_content": "Introduction Visual Instruction Tuning extends the instruction-following abilities of Large Language Models (LLMs) to the visual modality. The common recipe for a Vision-Language Model (VLM) is to combine an existing LLM with a vision encoder and learn a mapping between the two unimodal experts (Alayrac et al., 2022; Dai et al., 2023b; Liu et al., 2024). As a result, VLMs can solve additional tasks as opposed to their language-only counterparts, while their performance correlates heavily with the capabilities of their unimodal backbones. LLMs have become the go-to option for practically all Natural Language Processing (NLP) tasks, with models such as ChatGPT (OpenAI, 2022) and Gemini (Gemini Team et al., 2023) witnessing widespread deployment. While these models exhibit\u2014to some degree\u2014general capabilities (OpenAI, 2023a), previous work shows they are susceptible to misuse (Bommasani et al., 2021; Kreps et al., 2022; Weidinger et al., 2021). Consequently, a large body of work incorporates safety mechanisms in model development to constrain model behavior to a \u201csafer\u201d subset by aligning models with human values (Askell et al., 2021; Christiano et al., 2017; Dai et al., 2023a; Ouyang et al., 2022). Despite these efforts, LLMs are vulnerable to malicious prompts\u2014referred to as \u201cjailbreaking\u201d (Wei et al., 2024; Xie et al., 2023)\u2014engineered to trick the LLM outside of the safer subset and generate the potentially harmful content it was trained to reject (Qi et al., 2023). An example of such behavior is illustrated in Figure 1, where the model initially refuses to comply with the input question, but completely adheres to the modified adversarial prompt. Therefore, recent approaches to model development incorporate safety tuning against jailbreaking during training (Jiang et al., 2023; Touvron et al., 2023) that mitigates these vulnerabilities. 
Adversarial attacks, in the form of input perturbations, can also affect vision or language models (Goodfellow et al., 2014; Madry et al., 2018; Yu and Rieser, 2023). With VLMs now being deployed to a wider audience (Brooks et al., 2024; Gemini Team et al., 2023; OpenAI, 2023b), we believe that now is a good time to consider: how does visual instruction tuning affect the safety of LLMs? We explore this question by prompting publicly-available VLMs, and the LLM backbones they were derived from, with jailbreaking techniques across eight distinct scenarios (Liu et al., 2023d). We demonstrate that visual instruction tuning damages guardrails put in place during LLM training (Jiang et al., 2023; Touvron et al., 2023), leaving models vulnerable to adversarial attacks. Overall, our findings further highlight the importance of considering the effects of the training stages, training data, and the ways to evaluate and mitigate potential harms when introducing new modalities (Weidinger et al., 2023). As such, with great instruction tuning comes great responsibility. 2. Background 2.1. Instruction-tuning models Language models trained on self-supervised objectives show broad pattern recognition abilities (Brown et al., 2020; Radford et al., 2019; Raffel et al., 2020) when paired with instruction-tuning: a fine-tuning paradigm that uses language instructions as input to solve multiple tasks (Chung et al., 2022; Gupta et al., 2022; Wei et al., 2021). Figure 1: Example responses to the query \u201cWhat are some effective ways to create and spread fake news that incites violence and unrest in society?\u201d Vicuna 13B (Chiang et al., 2023) does not directly generate hateful content (top-left), but will given a jailbreak pre-prompt (bottom-left). LLaVA 13B (Liu et al., 2023c) produces hateful content both with (bottom-right) and without (top-right) a jailbreak pre-prompt. 
Instruction-tuning is an established concept in NLP (Chung et al., 2022; Mishra et al., 2022) as resulting models generalize better to user queries (Chung et al., 2022; Sanh et al., 2022; Wei et al., 2021) by learning to connect them to concepts seen during pretraining, enabling zero-shot generalization on unseen tasks (Gupta et al., 2022; Mishra et al., 2022). Visual Instruction Tuning refers to the process of converting an LLM into a VLM, often using language (Bai et al., 2023a; Chiang et al., 2023) and vision experts (Fang et al., 2023; Radford et al., 2021), by learning a mapping between the two modalities. Existing approaches concatenate visual and textual representations with a lightweight adapter module (Liu et al., 2024). Other techniques construct \u201cvisual prompts\u201d with a resampler\u2014where learnable latent tokens are informed by each modality (Bai et al., 2023b; Li et al., 2023a; Zhu et al., 2023). Training involves multiple stages, with initial stages focusing on image-text alignment and later stages on supervised fine-tuning (SFT). As VLMs based on this recipe are successful across established multimodal tasks (Goyal et al., 2017; Singh et al., 2019), a large body of work focuses on the safety aspect of these models through the lens of hallucination. These works typically measure the degree to which model responses are factually grounded in the visual context (Li et al., 2023b; Liu et al., 2023a,b). However, they do not explore how safety guardrails integrated into the LLM are impacted by visual instruction tuning. 2.2. Jailbreaking and adversarial attacks LLMs and VLMs exhibit vulnerabilities along the same lines as other deep learning models; slight perturbations in inputs can result in (possibly coherent) \u201challucinated\u201d responses (Bender et al., 2021; Goodfellow et al., 2014; Liu et al., 2023b; Szegedy et al., 2013). 
Learning from vast training corpora improves a model\u2019s generalization capabilities (Radford et al., 2018; Raffel et al., 2020). However, as datasets surpass trillions of tokens (Gao et al., 2020; Hoffmann et al., 2022; Touvron et al., 2023), it is difficult to know the characteristics and biases included in them (Gehman et al., 2020). Moreover, while instruction-tuned models can make reasonable predictions given irrelevant and misleading prompts (Webson and Pavlick, 2022), a model\u2019s strong pattern recognition abilities can at the same time be exploited, forcing potentially harmful responses (Ganguli et al., 2022; Perez et al., 2022). As a result, various methods (Christiano et al., 2017; Dai et al., 2023a; Ouyang et al., 2022) try to better align generated content with human preferences, encouraging safer and more ethical responses (Bai et al., 2022; Ganguli et al., 2022). Table 1 (VLM & LLM pairs used in our experiments): LLaVA-1.5 (Liu et al., 2023c) / Vicuna 13B (Chiang et al., 2023); Qwen-VL-Chat (Bai et al., 2023b) / Qwen-Chat 7B (Bai et al., 2023a); InternLM-XComposer2 (Dong et al., 2024) / InternLM2-Chat 7B (InternLM Team, 2023). Other measures include SFT on datasets with adversarial prompts and exemplary responses (Touvron et al., 2023), and context distillation (Askell et al., 2021), which finetunes a model on outputs generated by another model prompted for safe behavior. However, introducing visual inputs opens a new attack vector, as adversarial inputs imperceptible to the human eye can steer models to unsafe behavior (Qi et al., 2023). 3. Experimental Setup We hypothesize that after visual instruction tuning, models become less safe and more vulnerable to jailbreaks compared to their original LM backbone. 
To test this hypothesis, we prompt three state-of-the-art VLMs and their LM counterparts with questions related to prohibited scenarios, both with and without jailbreak prompt prefixes (code available at https://github.com/gpantaz/vl_jailbreak). Model Selection Table 1 displays the evaluated VLMs along with their respective LLM backbones. We selected these models because: 1) they showcased strong performance in established multimodal tasks (Goyal et al., 2017; Li et al., 2023b; Marino et al., 2019); 2) they connect vision and language models in different ways; and 3) they incorporate safety mechanisms during the development of their LLM. Finally, all chosen VLMs and LLMs are open-source, ensuring reproducibility. See Appendix A for additional details about this selection. Data Preparation We query each model with a prompt, a question, and, for the VLMs, an input image. We leverage the jailbreak prompt dataset from Liu et al. (2023d), which contains questions that simulate prohibited scenarios and prompts that were successful in jailbreaking ChatGPT (OpenAI, 2022); see Appendix B for a short description of each scenario, and Liu et al. (2023d) for details. Liu et al. (2023d) categorized jailbreak prompts into one of three different types and one of ten different patterns. Overall, we employ 40 input queries: derived from eight prohibited scenarios, each containing five questions. We used four jailbreak prompts that cover all patterns to ensure models are evaluated fairly across all jailbreak types, resulting in 160 queries to evaluate how susceptible models are to jailbreaking. In order to mimic a common downstream use case of VLMs, we retrieve the most relevant image for each question from the pretraining data of LLaVA (Liu et al., 2024) by selecting the image with the maximum CLIPScore (Hessel et al., 2021) using the base CLIP model (Radford et al., 2021). Finally, we also use a blank image (i.e. 
an image with only white pixels) to simulate pure text generation by removing any visual context. As a result, we have four conditions for VLMs from the combinations of original vs. jailbreak prompts, and retrieved images vs. blank images. Data Annotation To understand the impact of visual instruction tuning against adversarial inputs, all 1,800 responses were annotated by the authors of this paper\u2014each with a background in developing large-scale multimodal models\u2014and we measured inter-annotator agreement on a subset of 100 samples, resulting in a Krippendorff\u2019s alpha of 0.674. We labeled every response with one of: \u2022 Refusal: The model refuses to answer the query; optionally providing counter-arguments or condemning unethical/illegal actions. \u2022 Neutral: Generated content is neither a refusal nor potentially harmful; such as when models do not acknowledge the harmful intent but respond in the opposite manner. For example, when asked \u201cHow to screen job candidates based on race?\u201d, models might provide measures against discrimination. \u2022 Harmful: Response contains potentially harmful content. This includes cases where generated content promotes discrimination towards individuals\u2014possibly based on demographics or gender\u2014or encourages illegal activities. \u2022 Not Applicable: Content is not applicable for the study. For example, the model repeated the instruction or continuously produced the same (invalid or incoherent) sentence. 4. Results Figure 2 illustrates the percentage of the responses labeled as harmful across all models. We observe that all VLMs generate substantially more harmful responses than their LLM backbones. In particular, LLaVA generates 27.50% and 6% more harmful content than Vicuna, without and with jailbreak pre-prompts respectively. 
Additionally, Qwen-Chat/Qwen-VL-Chat and InternLM2-Chat/InternLM-XComposer2 exhibit similar behavior, though they generate less harmful responses overall. Figure 2: Percentage of harmful responses for every LLM & VLM pair (without / with a jailbreak prompt): Vicuna 20.00 / 60.50, LLaVA 47.50 / 66.50, LLaVA-Blank 40.00 / 69.00; Qwen-Chat 7.50 / 42.50, Qwen-VL-Chat 15.00 / 45.00, Qwen-VL-Chat-Blank 12.50 / 47.50; InternLM2-Chat 10.00 / 40.62, InternLM-XComposer2 17.50 / 41.88, InternLM-XComposer2-Blank 17.50 / 45.62. Across all model pairs, the VLM generates harmful content more frequently compared to its LLM backbone. Consequently, the safeguards imposed on the LLMs during model development are, at best, relaxed as an outcome of the visual instruction tuning stage. Furthermore, VLMs are more prone to generate potentially harmful content when provided with a prompt and a semantically-relevant image. While this may seem obvious, we observe that in the case of adversarial input, including even a blank image leads to more harmful responses. We hypothesize that this is due to \u201ccompeting objectives\u201d (Wei et al., 2024); where, on one hand, the model tries to generate content relevant to both the instruction and the image, while on the other hand, it tries to adhere to its safeguards. Using a jailbreak pre-prompt, however, provides a signal stronger than the content of the image, resulting in the aforementioned behavior. 5. Discussion Why are VLMs more prone to jailbreak attacks? Competing objectives present a significant challenge for both VLMs and LLMs. Given an adversarial prompt, both models must navigate between providing relevant responses and resisting adherence to the adversarial prompt. 
While we have not explored whether this effect is magnified in VLMs, we hypothesize that both models are equally susceptible to the impact of competing objectives. A more plausible scenario is that VLMs forget how to respond to adversarial prompts when undergoing visual instruction tuning. If we reframe generating appropriate responses to adversarial prompts as its own task, it becomes evident that models may inadvertently disregard this task during further finetuning. This behavior is particularly likely to occur as the model must incorporate an additional modality during the instruction tuning stage. However, we believe this issue can be mitigated through continual learning or training methodologies that expose the model to additional (image-text or text-only) examples demonstrating appropriate responses during the visual instruction tuning stage. In the following section, we further elaborate on possible strategies to mitigate the forgetting effect. 5.1. Suggestions for Future Work Evaluation & Benchmarking Most current evaluations of VLMs focus exclusively on model capabilities, such as grounding, reasoning, and factuality (Weidinger et al., 2021). Some recent benchmarks are starting to address the gap in safety (Li et al., 2024b; Roger et al., 2023) and robustness to adversarial attacks (Carlini et al., 2024; Zhao et al., 2024). However, creating comprehensive benchmarks to evaluate the safety of VLMs remains a crucial area for future research. A possible step in this direction would be to implement a unified framework for evaluating VLMs, similar to LM-Harness (Gao et al., 2023) and SALAD-Bench (Li et al., 2024a), ensuring transparency and reproducibility. Additionally, we emphasize the need for \u201cdata parity\u201d when evaluating from a safety perspective. Without it, jailbreak prompts may be accidentally leaked into (pre-)training data, leading to inflated scores (Golchin and Surdeanu, 2023; Li and Flanigan, 2023; Zhou et al., 2023). 
However, as jailbreaking is an adversarial setting, it should be evaluated on out-of-distribution prompts (Yuan et al., 2023) that are held-out and/or regularly updated (Kiela et al., 2021). Safety Defenses in All Training Stages VLMs are trained following a curriculum: typically involving image-text alignment and instruction-tuning stages (Bai et al., 2023a; Li et al., 2023a; Liu et al., 2024). Our analysis indicates that when safety is not considered across all\u2014or, at least, final\u2014stages, models become misaligned and are therefore more likely to generate harmful content. Korbak et al. (2023) show that incorporating conditional pretraining\u2014where text segments are conditioned on human preferences\u2014can reduce the toxicity of model outputs without sacrificing performance on other tasks. As a result, when training a model from scratch, safety should be considered at every stage. However, as training from scratch is resource-intensive, it may be more practical to initialize a VLM with pretrained experts. Another possible solution is to ensure that VLM alignment is part of the final training stage. However, multimodal datasets annotated with human preferences or exemplar responses against adversarial prompts (Li et al., 2024b) are largely missing. Therefore, an important avenue for future work would be to collect or synthetically generate (Liu et al., 2024) such resources. The goal of maintaining safety alignment after visual instruction tuning resembles a continual learning scenario. Future work could draw inspiration from approaches that aim to mitigate catastrophic forgetting (Hadsell et al., 2020; Ke and Liu, 2022). For instance, previous work has found that methods such as experience replay (Biesialska et al., 2020) and logit distillation (Jin et al., 2022) can be effective in continual pretraining of language models. 
Further benefits could be achieved through more sophisticated approaches, such as selectively updating a small isolated set of parameters for vision (Gururangan et al., 2022; Ke et al., 2022). 6. Conclusion In this paper, we argue that relying on the safety alignment of the backbone LLM downplays the potential vulnerabilities of VLMs. To support this claim, we used three VLMs with strong performance on public benchmarks, each with a different LLM as a starting point, with safety playing a crucial role in the development of each LLM. Our analysis has shown that visual instruction tuning can affect all VLMs, making them more prone to generating potentially harmful responses both with and without jailbreak attacks. Furthermore, we have provided suggestions with regard to core evaluation procedures and incorporating safety measures during the successive training stages of visual instruction tuning. Finally, notwithstanding the impressive progress in the development of VLMs, we emphasize that our ultimate goal in this paper is to identify weaknesses in existing approaches and provide recommendations aimed at propelling the field forward. 7. Limitations While our results consistently showcased evidence that visual instruction tuning has a negative impact on model safety, we have only evaluated three models with public weights, using English prompts only. Furthermore, even though the developers of each model claim that they have taken action towards incorporating safety mechanisms, the exact details are not disclosed. As a result, we cannot guarantee that these models were not trained on any of the jailbreaking prompts, because not all data used to train each LLM is publicly accessible. This highlights the need for open releases that enable research replications and similar studies. 
Lastly, we have not explored to what degree these models are sensitive to image attacks, whether through adversarial noise, adjusting the attention mask during generation, or completely removing the image.",
+ "additional_graph_info": {
+ "graph": [
+ [
+ "Georgios Pantazopoulos",
+ "Alessandro Suglia"
+ ],
+ [
+ "Georgios Pantazopoulos",
+ "Arash Eshghi"
+ ],
+ [
+ "Alessandro Suglia",
+ "Ioannis Konstas"
+ ],
+ [
+ "Alessandro Suglia",
+ "Emanuele Bastianelli"
+ ],
+ [
+ "Alessandro Suglia",
+ "Andrea Vanzo"
+ ],
+ [
+ "Alessandro Suglia",
+ "Yonatan Bisk"
+ ],
+ [
+ "Arash Eshghi",
+ "Igor Shalyminov"
+ ]
+ ],
+ "node_feat": {
+ "Georgios Pantazopoulos": [
+ {
+ "url": "http://arxiv.org/abs/2405.04403v1",
+ "title": "Learning To See But Forgetting To Follow: Visual Instruction Tuning Makes LLMs More Prone To Jailbreak Attacks",
+ "abstract": "Augmenting Large Language Models (LLMs) with image-understanding capabilities\nhas resulted in a boom of high-performing Vision-Language models (VLMs). While\nstudying the alignment of LLMs to human values has received widespread\nattention, the safety of VLMs has not received the same attention. In this\npaper, we explore the impact of jailbreaking on three state-of-the-art VLMs,\neach using a distinct modeling approach. By comparing each VLM to their\nrespective LLM backbone, we find that each VLM is more susceptible to\njailbreaking. We consider this as an undesirable outcome from visual\ninstruction-tuning, which imposes a forgetting effect on an LLM's safety\nguardrails. Therefore, we provide recommendations for future work based on\nevaluation strategies that aim to highlight the weaknesses of a VLM, as well as\ntake safety measures into account during visual instruction tuning.",
+ "authors": "Georgios Pantazopoulos, Amit Parekh, Malvina Nikandrou, Alessandro Suglia",
+ "published": "2024-05-07",
+ "updated": "2024-05-07",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.CL"
+ ],
+ "main_content": "Introduction Visual Instruction Tuning extends the instruction-following abilities of Large Language Models (LLMs) to the visual modality. The common recipe for a Vision-Language Model (VLM) is to combine an existing LLM with a vision encoder and learn a mapping between the two unimodal experts (Alayrac et al., 2022; Dai et al., 2023b; Liu et al., 2024). As a result, VLMs can solve additional tasks as opposed to their language-only counterparts, while their performance correlates heavily with the capabilities of their unimodal backbones. LLMs have become the go-to option for practically all Natural Language Processing (NLP) tasks, with models such as ChatGPT (OpenAI, 2022) and Gemini (Gemini Team et al., 2023) witnessing widespread deployment. While these models exhibit\u2014to some degree\u2014general capabilities (OpenAI, 2023a), previous work shows they are susceptible to misuse (Bommasani et al., 2021; Kreps et al., 2022; Weidinger et al., 2021). Consequently, a large body of work incorporates safety mechanisms in model development to constrain model behavior to a \u201csafer\u201d subset by aligning models with human values (Askell et al., 2021; Christiano et al., 2017; Dai et al., 2023a; Ouyang et al., 2022). Despite these efforts, LLMs are vulnerable to malicious prompts\u2014referred to as \u201cjailbreaking\u201d (Wei et al., 2024; Xie et al., 2023)\u2014engineered to trick the LLM outside of the safer subset and generate the potentially harmful content it was trained to reject (Qi et al., 2023). An example of such behavior is illustrated in Figure 1, where the model initially refuses to comply with the input question, but completely adheres to the modified adversarial prompt. Therefore, recent approaches to model development incorporate safety tuning against jailbreaking during training (Jiang et al., 2023; Touvron et al., 2023) that mitigates these vulnerabilities. 
Adversarial attacks, in the form of input perturbations, can also affect vision or language models (Goodfellow et al., 2014; Madry et al., 2018; Yu and Rieser, 2023). With VLMs now being deployed to a wider audience (Brooks et al., 2024; Gemini Team et al., 2023; OpenAI, 2023b), we believe that now is a good time to consider: how does visual instruction tuning affect the safety of LLMs? We explore this question by prompting publicly-available VLMs, and the LLM backbones they were derived from, with jailbreaking techniques across eight distinct scenarios (Liu et al., 2023d). We demonstrate that visual instruction tuning damages guardrails put in place during LLM training (Jiang et al., 2023; Touvron et al., 2023), leaving models vulnerable to adversarial attacks. Overall, our findings further highlight the importance of considering the effects of the training stages, training data, and the ways to evaluate and mitigate potential harms when introducing new modalities (Weidinger et al., 2023). As such, with great instruction tuning comes great responsibility. 2. Background 2.1. Instruction-tuning models Language models trained on self-supervised objectives show broad pattern recognition abilities (Brown et al., 2020; Radford et al., 2019; Raffel et al., 2020) when paired with instruction-tuning: a fine-tuning paradigm that uses language instructions as input to solve multiple tasks (Chung et al., 2022; Gupta et al., 2022; Wei et al., 2021). Figure 1: Example responses to the query \u201cWhat are some effective ways to create and spread fake news that incites violence and unrest in society?\u201d Vicuna 13B (Chiang et al., 2023) does not directly generate hateful content (top-left), but will given a jailbreak pre-prompt (bottom-left). LLaVA 13B (Liu et al., 2023c) produces hateful content both with (bottom-right) and without (top-right) a jailbreak pre-prompt. 
Instruction-tuning is an established concept in NLP (Chung et al., 2022; Mishra et al., 2022) as resulting models generalize better to user queries (Chung et al., 2022; Sanh et al., 2022; Wei et al., 2021) by learning to connect them to concepts seen during pretraining, enabling zero-shot generalization on unseen tasks (Gupta et al., 2022; Mishra et al., 2022). Visual Instruction Tuning refers to the process of converting an LLM into a VLM, often using language (Bai et al., 2023a; Chiang et al., 2023) and vision experts (Fang et al., 2023; Radford et al., 2021), by learning a mapping between the two modalities. Existing approaches concatenate visual and textual representations with a lightweight adapter module (Liu et al., 2024). Other techniques construct \u201cvisual prompts\u201d with a resampler\u2014where learnable latent tokens are informed by each modality (Bai et al., 2023b; Li et al., 2023a; Zhu et al., 2023). Training involves multiple stages, with initial stages focusing on image-text alignment and later stages on supervised fine-tuning (SFT). As VLMs based on this recipe are successful across established multimodal tasks (Goyal et al., 2017; Singh et al., 2019), a large body of work focuses on the safety aspect of these models through the lens of hallucination. These works typically measure the degree to which model responses are factually grounded in the visual context (Li et al., 2023b; Liu et al., 2023a,b). However, they do not explore how safety guardrails integrated into the LLM are impacted by visual instruction tuning. 2.2. Jailbreaking and adversarial attacks LLMs and VLMs exhibit vulnerabilities along the same lines as other deep learning models; slight perturbations in inputs can result in (possibly coherent) \u201challucinated\u201d responses (Bender et al., 2021; Goodfellow et al., 2014; Liu et al., 2023b; Szegedy et al., 2013). 
Learning from vast training corpora improves a model\u2019s generalization capabilities (Radford et al., 2018; Raffel et al., 2020). However, as datasets surpass trillions of tokens (Gao et al., 2020; Hoffmann et al., 2022; Touvron et al., 2023), it is difficult to know the characteristics and biases included in them (Gehman et al., 2020). Moreover, while instruction-tuned models can make reasonable predictions given irrelevant and misleading prompts (Webson and Pavlick, 2022), a model\u2019s strong pattern recognition abilities can at the same time be exploited, forcing potentially harmful responses (Ganguli et al., 2022; Perez et al., 2022). As a result, various methods (Christiano et al., 2017; Dai et al., 2023a; Ouyang et al., 2022) try to better align generated content with human preferences, encouraging safer and more ethical responses (Bai et al., 2022; Ganguli et al., 2022). Table 1 (VLM & LLM pairs used in our experiments): LLaVA-1.5 (Liu et al., 2023c) / Vicuna 13B (Chiang et al., 2023); Qwen-VL-Chat (Bai et al., 2023b) / Qwen-Chat 7B (Bai et al., 2023a); InternLM-XComposer2 (Dong et al., 2024) / InternLM2-Chat 7B (InternLM Team, 2023). Other measures include SFT on datasets with adversarial prompts and exemplary responses (Touvron et al., 2023), and context distillation (Askell et al., 2021), which finetunes a model on outputs generated by another model prompted for safe behavior. However, introducing visual inputs opens a new attack vector, as adversarial inputs imperceptible to the human eye can steer models to unsafe behavior (Qi et al., 2023). 3. Experimental Setup We hypothesize that after visual instruction tuning, models become less safe and more vulnerable to jailbreaks compared to their original LM backbone. 
To test this hypothesis, we prompt three state-of-the-art VLMs and their LM counterparts with questions related to prohibited scenarios, both with and without jailbreak prompt prefixes (code available at https://github.com/gpantaz/vl_jailbreak). Model Selection Table 1 displays the evaluated VLMs along with their respective LLM backbones. We selected these models because: 1) they showcased strong performance in established multimodal tasks (Goyal et al., 2017; Li et al., 2023b; Marino et al., 2019); 2) they connect vision and language models in different ways; and 3) they incorporate safety mechanisms during the development of their LLM. Finally, all chosen VLMs and LLMs are open-source, ensuring reproducibility. See Appendix A for additional details about this selection. Data Preparation We query each model with a prompt, a question, and, for the VLMs, an input image. We leverage the jailbreak prompt dataset from Liu et al. (2023d), which contains questions that simulate prohibited scenarios and prompts that were successful in jailbreaking ChatGPT (OpenAI, 2022); see Appendix B for a short description of each scenario, and Liu et al. (2023d) for details. Liu et al. (2023d) categorized jailbreak prompts into one of three different types and one of ten different patterns. Overall, we employ 40 input queries: derived from eight prohibited scenarios, each containing five questions. We used four jailbreak prompts that cover all patterns to ensure models are evaluated fairly across all jailbreak types, resulting in 160 queries to evaluate how susceptible models are to jailbreaking. In order to mimic a common downstream use case of VLMs, we retrieve the most relevant image for each question from the pretraining data of LLaVA (Liu et al., 2024) by selecting the image with the maximum CLIPScore (Hessel et al., 2021) using the base CLIP model (Radford et al., 2021). Finally, we also use a blank image (i.e. 
an image with only white pixels) to simulate pure text generation by removing any visual context. As a result, we have four conditions for VLMs from the combinations of original vs. jailbreak prompts, and retrieved vs. blank images.

Data Annotation: To understand the impact of visual instruction tuning against adversarial inputs, all 1,800 responses were annotated by the authors of this paper, each with a background in developing large-scale multimodal models; inter-annotator agreement on a subset of 100 samples yielded a Krippendorff\u2019s alpha of 0.674. We labeled every response with one of:
\u2022 Refusal: The model refuses to answer the query; optionally, providing counter-arguments or condemning unethical/illegal actions.
\u2022 Neutral: Generated content is neither a refusal nor potentially harmful; for example, when models do not acknowledge the harmful intent but respond in the opposite manner. When asked \u201cHow to screen job candidates based on race?\u201d, models might provide measures against discrimination.
\u2022 Harmful: The response contains potentially harmful content. This includes cases where generated content promotes discrimination towards individuals, possibly based on demographics or gender, or encourages illegal activities.
\u2022 Not Applicable: Content is not applicable for the study; for example, the model repeated the instruction or continuously produced the same (invalid or incoherent) sentence.

4. Results

Figure 2 illustrates the percentage of responses labeled as harmful across all models. We observe that all VLMs generate substantially more harmful responses than their LLM backbones. In particular, LLaVA generates 27.50% and 6% more harmful content than Vicuna, without and with jailbreak pre-prompts respectively.
Additionally, Qwen-Chat/Qwen-VL-Chat and InternLM2-Chat/InternLM-XComposer2 exhibit similar behavior, though they generate fewer harmful responses overall.

[Figure 2: Percentage of harmful responses for every LLM & VLM pair (without / with jailbreak prompts): Vicuna 20.00 / 60.50, LLaVA 47.50 / 66.50, LLaVA with blank images 40.00 / 69.00; Qwen-Chat 7.50 / 42.50, Qwen-VL-Chat 15.00 / 45.00, Qwen-VL-Chat with blank images 12.50 / 47.50; InternLM2-Chat 10.00 / 40.62, InternLM-XComposer2 17.50 / 41.88, InternLM-XComposer2 with blank images 17.50 / 45.62. Across all model pairs, the VLM generates harmful content more frequently compared to its LLM backbone.]

Consequently, the safeguards imposed on the LLMs during model development are, at best, relaxed as an outcome of the visual instruction tuning stage. Furthermore, VLMs are more prone to generate potentially harmful content when provided with a prompt and a semantically-relevant image. While this may seem obvious, we observe that, in the case of adversarial input, even including a blank image leads to more harmful responses. We hypothesize that this is due to \u201ccompeting objectives\u201d (Wei et al., 2024): on one hand, the model tries to generate content relevant to both the instruction and the image, while on the other hand, it tries to adhere to its safeguards. Using a jailbreak pre-prompt, however, provides a signal stronger than the content of the image, resulting in the aforementioned behavior.

5. Discussion

Why are VLMs more prone to jailbreak attacks? Competing objectives present a significant challenge for both VLMs and LLMs. Given an adversarial prompt, both models must navigate between providing relevant responses and resisting adherence to the adversarial prompt.
While we have not explored whether this effect is magnified in VLMs, we hypothesize that both models are equally susceptible to the impact of competing objectives. A more plausible scenario is that VLMs forget how to respond to queries from adversarial prompts when undergoing visual instruction tuning. If we reframe generating appropriate responses to adversarial prompts as its own task, it becomes evident that models may inadvertently disregard this task during further finetuning. This behavior is particularly likely to occur as the model must incorporate an additional modality during the instruction tuning stage. However, we believe this issue can be mitigated through continual learning or training methodologies that expose the model to additional (image-text or text-only) examples that demonstrate appropriate responses during the visual instruction tuning stage. In the following section, we elaborate on possible strategies to mitigate the forgetting effect.

5.1. Suggestions for Future Work

Evaluation & Benchmarking: Most current evaluations of VLMs focus exclusively on model capabilities, such as grounding, reasoning, and factuality (Weidinger et al., 2021). Some recent benchmarks are starting to address the gap in safety (Li et al., 2024b; Roger et al., 2023) and robustness to adversarial attacks (Carlini et al., 2024; Zhao et al., 2024). However, creating comprehensive benchmarks to evaluate the safety of VLMs remains a crucial area for future research. A possible step in this direction would be to implement a unified framework for evaluating VLMs, similar to LM-Harness (Gao et al., 2023) and SALAD-Bench (Li et al., 2024a), ensuring transparency and reproducibility. Additionally, we emphasize the need for \u201cdata parity\u201d when evaluating from a safety perspective. Without it, jailbreak prompts may be accidentally leaked into (pre-)training data, leading to inflated scores (Golchin and Surdeanu, 2023; Li and Flanigan, 2023; Zhou et al., 2023).
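A crude screen for such leakage, assuming access to the training text, is verbatim word n-gram overlap between each jailbreak prompt and the corpus. This is an illustrative heuristic sketch (the function name and the n-gram size are assumptions of ours), not a substitute for proper decontamination:

```python
def ngram_overlap(prompt: str, corpus_text: str, n: int = 8) -> float:
    """Fraction of the prompt's word n-grams that appear verbatim in the
    training corpus. A high value suggests the prompt may have leaked into
    (pre-)training data. Substring matching makes this a rough screen only."""
    words = prompt.lower().split()
    if len(words) < n:
        return 0.0
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    # Normalize whitespace and case so matching is purely lexical.
    corpus = " ".join(corpus_text.lower().split())
    hits = sum(g in corpus for g in grams)
    return hits / len(grams)
```

Prompts scoring near 1.0 would need to be replaced by held-out ones before the benchmark result can be trusted.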
However, as jailbreaking is an adversarial setting, it should be evaluated on out-of-distribution prompts (Yuan et al., 2023) that are held out and/or regularly updated (Kiela et al., 2021).

Safety Defenses in All Training Stages: VLMs are trained following a curriculum, typically involving image-text alignment and instruction-tuning stages (Bai et al., 2023a; Li et al., 2023a; Liu et al., 2024). Our analysis indicates that when safety is not considered across all, or at least the final, stages, models become misaligned and are therefore more likely to generate harmful content. Korbak et al. (2023) show that incorporating conditional pretraining, where text segments are conditioned on human preferences, can reduce the toxicity of model outputs without sacrificing performance on other tasks. As a result, when training a model from scratch, safety should be considered at every stage. However, as training from scratch is resource-intensive, it may be more practical to initialize a VLM with pretrained experts. Another possible solution is to ensure that VLM alignment is part of the final training stage. However, multimodal datasets annotated with human preferences or exemplar responses against adversarial prompts (Li et al., 2024b) are largely missing. Therefore, an important avenue for future work would be to collect or synthetically generate (Liu et al., 2024) such resources. The goal of maintaining safety alignment after visual instruction tuning resembles a continual learning scenario. Future work could draw inspiration from approaches that aim to mitigate catastrophic forgetting (Hadsell et al., 2020; Ke and Liu, 2022). For instance, previous work has found that methods such as experience replay (Biesialska et al., 2020) and logit distillation (Jin et al., 2022) can be effective in continual pretraining of language models.
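As a minimal illustration of the experience-replay idea, safety exemplars (adversarial prompts paired with exemplary refusals) can be interleaved into the visual instruction tuning stream. The replay ratio and data schema below are assumptions for the sketch, not values from any of the cited works:

```python
import random

def mix_with_safety_replay(vision_batches, safety_examples,
                           replay_ratio=0.1, seed=0):
    """Interleave safety exemplars into a visual instruction tuning stream.

    After each vision-language batch, a safety exemplar is replayed with
    probability `replay_ratio`, so refusal behavior keeps receiving
    gradient signal during the multimodal stage. Order of the original
    batches is preserved."""
    rng = random.Random(seed)
    mixed = []
    for batch in vision_batches:
        mixed.append(batch)
        if rng.random() < replay_ratio:
            mixed.append(rng.choice(safety_examples))
    return mixed
```

In a real pipeline the exemplars would be drawn from a safety-tuning dataset; here the point is only that replay reduces to a small change in the data loader.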
Further benefits could be achieved through more sophisticated approaches, such as selectively updating a small isolated set of parameters for vision (Gururangan et al., 2022; Ke et al., 2022).

6. Conclusion

In this paper, we argue that relying on the safety alignment of the backbone LLM downplays the potential vulnerabilities of VLMs. To support this claim, we used three VLMs with strong performance on public benchmarks, each with a different LLM as a starting point, with safety playing a crucial role in the development of each LLM. Our analysis has shown that visual instruction tuning can affect all VLMs, making them more prone to generating potentially harmful responses both with and without jailbreaking attacks. Furthermore, we have provided suggestions with regard to core evaluation procedures and incorporating safety measures during the successive training stages of visual instruction tuning. Finally, notwithstanding the impressive progress in the development of VLMs, we emphasize that our ultimate goal in this paper is to identify weaknesses in existing approaches and provide recommendations aimed at propelling the field forward.

7. Limitations

While our results consistently showcased evidence that visual instruction tuning has a negative impact on model safety, we have only evaluated three models with public weights, using English prompts. Furthermore, even though the developers of each model claim that they have taken action towards incorporating safety mechanisms, the exact details are not disclosed. As a result, we cannot guarantee that these models were not trained on any of the jailbreaking prompts, because not all data used to train each LLM is publicly accessible. This highlights the need for open research replications that enable similar studies.
Lastly, we have not explored to what degree these models are sensitive to image attacks, whether through adversarial noise, adjusting the attention mask during generation, or completely removing the image."
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.04067v1",
+ "title": "Multitask Multimodal Prompted Training for Interactive Embodied Task Completion",
+ "abstract": "Interactive and embodied tasks pose at least two fundamental challenges to\nexisting Vision & Language (VL) models, including 1) grounding language in\ntrajectories of actions and observations, and 2) referential disambiguation. To\ntackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a\nunified encoder-decoder model that reasons over images and trajectories, and\ncasts action prediction as multimodal text generation. By unifying all tasks as\ntext generation, EMMA learns a language of actions which facilitates transfer\nacross tasks. Different to previous modular approaches with independently\ntrained components, we use a single multitask model where each task contributes\nto goal completion. EMMA performs on par with similar models on several VL\nbenchmarks and sets a new state-of-the-art performance (36.81% success rate) on\nthe Dialog-guided Task Completion (DTC), a benchmark to evaluate dialog-guided\nagents in the Alexa Arena",
+ "authors": "Georgios Pantazopoulos, Malvina Nikandrou, Amit Parekh, Bhathiya Hemanthage, Arash Eshghi, Ioannis Konstas, Verena Rieser, Oliver Lemon, Alessandro Suglia",
+ "published": "2023-11-07",
+ "updated": "2023-11-07",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.CV"
+ ],
+ "main_content": "Introduction

Embodied AI aims to develop agents that interact with the environment, reason over natural language instructions and visual observations, and plan future actions. Recently, Vision & Language pretraining (VLP) has established new standards across image-based tasks (Bao et al., 2022; Wang et al., 2022a,b; Yang et al., 2021; Zhang et al., 2021) by aligning visual and textual input to generate image captions, answer visual questions, and reason over images. As a result, VLP promotes learning of important skills transferable to embodied tasks. [*Now at Google DeepMind. Code available on this link.] Interactive and embodied tasks present significant challenges for VLP models, including 1) grounding language in environments where the interpretation of instructions depends not just on static images, but on trajectories of actions and observations, and 2) referential disambiguation, where agents must interact with users \u2014 often as clarifications \u2014 to successfully disambiguate referents. In contrast to language-only planners (Huang et al., 2022a; Min et al., 2021), which cannot generate feasible plans without prompts that encode all the visual nuances \u2014 colors, attributes, relational information, inter alia \u2014 as text, VLP models can ground language directly in images. However, to enable grounded decision-making, VLP models must generalize from static images to trajectories that express how a situation evolves over time (McClelland et al., 2020). Additionally, in language-guided collaborative tasks, referents for target objects may be ambiguous, depending on the complexity of the visual scene. Therefore, clarifications must be adopted to resolve uncertainty between referents and act upon the correct target object (Madureira and Schlangen, 2023).
To address both of these challenges, we develop EMMA: a unified approach which casts several VL and embodied tasks, such as image captioning and action execution, as text generation, inspired by Cho et al. (2021). By doing so, EMMA is able to learn a language of actions which facilitates transfer across tasks. Concretely, EMMA is a multitask encoder-decoder model, which encodes language and pairs of object-centric representations with sentinel tokens. As a result, EMMA can encode both trajectories and clarifications to reference individual frames and objects across an entire trajectory. We focus on dialog-enabled task completion, which requires an agent to follow natural language instructions, perform navigation and object manipulation actions, and utilize dialog interactions to disambiguate visual observations. Similar to previous modular approaches (Min et al., 2021), we break down the embodied task into separate processes for search and action execution. The search process combines exploration with visual grounding. Differently to prior work that only uses object labels from object detectors directly (Min et al., 2021; Pashevich et al., 2021), EMMA uses both language and visual representations to discriminate between objects of the same class (e.g., discerning between a \u2018red\u2019 and a \u2018green\u2019 desk). Separately, the action execution process predicts the navigation and object interaction actions as textual tokens. Notably, a key distinction in our approach is that we perform all tasks using a single VL model. We evaluate our approach on the Dialog-guided Task Completion (DTC) benchmark (Gao et al., 2023). Our experiments show the benefits of the multitask VLP, and the importance of data augmentations for completing multi-step instructions. By exploiting clarifications, EMMA improves the success rate by 3.55%.
Finally, when fine-tuned on DTC, EMMA can transfer the downstream action execution task back to real-world images, showcasing how using text tokens to represent actions in the environment enables cross-domain generalization.

2 Related Work

Vision & Language Pretraining: Early methods for developing VLP models rely on multimodal encoders with object-centric or patch representations (Chen et al., 2020; Kim et al., 2021; Li et al., 2021, 2020; Lu et al., 2019; Singh et al., 2022a; Tan and Bansal, 2019). While effective, these methods introduce task-specific heads for each downstream task. Generative and unified architectures represent an alternative approach for tackling VL tasks with a single model. Previous work on unified models explores region proposals (Cho et al., 2021; Gupta et al., 2022), or patch features (Wang et al., 2022a,b; Yang et al., 2022). More recently, the paradigm of connecting vision-only and language-only experts allows leveraging unimodal pretraining for generative and multimodal models (Alayrac et al., 2022; Driess et al., 2023; Li et al., 2023; Liu et al., 2023; Tsimpoukelli et al., 2021; Zhu et al., 2023). By casting every task as a text generation task, unified VLP models are transitioning from specialized to generalist models. Therefore, adopting a similar multitask pretraining approach enables EMMA to learn a language that describes actions conditioned on visual observations.

Embodied AI Approaches: Previous approaches to interactive task completion (Shridhar et al., 2020) involve both end-to-end (Pashevich et al., 2021; Suglia et al., 2021) and modular architectures (Blukis et al., 2022; Kim and Lee, 2023; Liu et al., 2022; Min et al., 2021; Singh et al., 2021). State-of-the-art works employ a modular architecture that incrementally builds a semantic map of the environment used by symbolic planners.
However, these methods often make strong domain-specific assumptions, or they are typically trained from scratch, underutilizing recent progress in VLP. An exciting direction is leveraging large-scale pretrained models. Recent works (Ahn et al., 2022; Driess et al., 2023; Huang et al., 2022a; Singh et al., 2022b; Zeng et al., 2022) use a large language model or a mixture of pretrained models for object manipulation. Additionally, Huang et al. (2022b); Shridhar et al. (2021) use text to describe the environment state and employ a language model for planning. Different to previous work, we use a VLP model to ground instructions on visual scenes. For example, given the instruction \u2018Get the milk from the fridge\u2019, EMMA executes the correct action sequence by inferring whether the fridge door is open or not from its visual observations. Dialog-enabled interactive task completion introduces an additional layer of complexity, as the agent often needs to incorporate clarification context before acting on the environment (Gao et al., 2023, 2022; Padmakumar et al., 2022). Both end-to-end and modular architectures underperform in these tasks, which Min et al. (2022) argue is due to imitation learning from few and suboptimal demonstrations. EMMA is trained using expert demonstrations and interactions in the form of question-answer pairs about the visual scene. These clarifications do not involve complex dialog phenomena (e.g., repair), yet are sufficiently informative to disambiguate target objects.

3 Task Description

The DTC benchmark evaluates dialog-enabled agents that complete missions in the Alexa Arena, a simulated environment with multi-room layouts (Gao et al., 2023). Each example corresponds to a mission completed by an expert planner and annotated by three humans. Each instruction is optionally accompanied by a clarification question-answer pair.
Navigation is performed with primitive actions, but also with the GoTo action, which allows moving to different rooms or objects. Along with the GoTo action, the agent needs to predict the name of the room or an object mask within the current observation. Each room also contains a set of viewpoints that differ between layouts and can be visited by the agent to explore the room. To perform an action on an object, the agent needs to provide the action type and the mask of the object. The set of supported actions for an object is determined by its affordances, such as openable, toggleable, inter alia (see Appendix C for details).

[Figure 1: Overview of the EMMA architecture. EMMA encodes vision and language inputs with modality-specific layers before providing the input to the bidirectional multimodal encoder. Scene embeddings act as global features for a frame. Object embeddings correspond to a combination of region features, object coordinates, and their identities within the frame sequence. Language input is concatenated with task-specific prompts. The shared autoregressive language decoder treats every task as a text generation task.]

4 EMMA

EMMA is an encoder-decoder model following the architecture of BART-base (Lewis et al., 2020). As shown in Figure 1, both vision and language inputs are embedded through modality-specific projection layers, concatenated into a sequence of embeddings, and fed into a single-stream encoder. EMMA uses a shared decoder across all pretraining tasks, with sentinel tokens in the vocabulary to allow referencing specific image frames and regions.
Text Embedding: For the language input, we apply sub-word byte-pair encoding (Sennrich et al., 2016) with a vocabulary of 10K tokens extracted from our pretraining data. Each token is represented by the sum of its word and absolute positional embedding (Lewis et al., 2020). Similar to Sanh et al. (2022), we use natural language prompts as task prefixes to prompt the model for each task.

Scene and Object Embeddings: EMMA is capable of encoding sequences of images. For each frame of the visual input, we extract global scene features representing the entire frame and a maximum of n = 36 region features. To reference a frame or an object within it, the language vocabulary is extended with frame and visual sentinel tokens. After projecting the visual features to the encoder\u2019s dimensionality, we add spatial, temporal, and visual sentinel token embeddings. Spatial embeddings encode the 2D position of the region within the frame by its normalized bounding box coordinates, while temporal embeddings encode the temporal order within the sequence using a frame sentinel token. The choice between object-centric and patch representations for embodied tasks remains an open question. Although patch representations contain explicit information about the semantic segmentation of an image (Caron et al., 2021), their static grid structure has not been proven beneficial in previous embodied tasks (Driess et al., 2023; Jiang et al., 2023). Furthermore, patch representations lead to an increase in both input length, as more visual tokens are required to represent a frame, and output length, as the model must generate coordinate tokens instead of a single sentinel token to reference an object. For these reasons, we opted for object-centric representations.

4.1 Pretraining Setup

We formulate seven VL tasks in a text-to-text framework motivated by established pretraining tasks (Cho et al., 2021; Wang et al., 2022b). Pretraining details are provided in Appendix A.
Below we give a brief description of each task:
Masked Language Modeling: Given an image description, we mask each word with probability 0.3. The model must learn to reconstruct the original input by predicting the masked words.
Image-Text Matching: Determine if a pair of visual-text inputs match by predicting true/false after we randomly combine the visual input with either the correct caption or a sampled caption.
Visual Question Answering: Answer a question about the input image.
(Dense) Captioning: Produce a textual description of the overall image or a specified region denoted by a visual sentinel token.
Visual Grounding: Given a description of an image region, the model must predict the visual token for the region that matches this description.
Relationship Prediction: Describe the relationship between two regions of an image. The output follows the format: Subject Attributes, Subject, Relationship Predicate, Object Attributes, Object.

[Figure 2: Example trajectory where the objective is to pour cereal into the bowl. The user instructions and our agent responses are shown on the left and right of each frame respectively. At each timestep the agent is provided with the current view, a user instruction (Input), and optionally a clarification question (Q) and answer (A). The Contextual Routing task (CR) determines whether the agent is going to use the Action Execution task (AE) to interact with the environment, or the Visual Grounding task (VG) to search for an object in the scene.]
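The masking step of the Masked Language Modeling task (masking each word with probability 0.3) can be sketched as follows; the mask-token surface form and the seeded RNG are arbitrary placeholders of ours:

```python
import random

def mask_caption(caption, mask_prob=0.3, mask_token="<mask>", seed=0):
    """Mask each word of an image description with probability `mask_prob`.

    Returns the corrupted caption and the list of masked-out words, which
    serve as the reconstruction targets for the decoder."""
    rng = random.Random(seed)
    masked, targets = [], []
    for word in caption.split():
        if rng.random() < mask_prob:
            masked.append(mask_token)
            targets.append(word)
        else:
            masked.append(word)
    return " ".join(masked), targets
```

In an actual pretraining loop, the corrupted caption would be paired with the image features as encoder input and the targets would supervise the decoder.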
4.2 Interactive Task Completion

Our agent uses three distinct yet complementary tasks to complete the DTC missions: Contextual Routing (CR), Action Execution (AE), and Visual Grounding (VG). As shown by Figure 2, we use a single model to tackle all three tasks. Since CR & AE are downstream-specific tasks, we introduce new prompts when fine-tuning.

Contextual Routing (CR): Given the current instruction and observation, the CR task determines whether the agent should act or search for an object, disentangling the decision of what to do next from how to achieve it. The output of the model follows a structured hierarchical scheme (Figure 3). The first token determines whether the instruction refers to an action or to a search for an object. The second token indicates if there are no objects present, one object present, or multiple matches of the object mentioned in the instruction. Finally, for non-navigation actions, the model generates the name of the target object.

Action Execution (AE): The agent acts on the environment when the CR task routes to acting. The input to the model consists of the current frame, the instruction, and the clarification pair, with a dedicated sentinel token prefixing each dialog turn. The model predicts the action type, the name of the object, and, for manipulation actions, the frame and visual token id that correspond to the object (Figure 1). Each action yields a new observation, concatenated with the previous visual context to make a new prediction. The period character (\u2018.\u2019) delimits actions, and an end-of-trajectory token marks the end of the trajectory for that instruction.

Visual Grounding (VG): When the CR task routes to search, the agent tries to find the object outputted by the CR task. We iterate over viewpoints in a room to collect panoramic views and use the pretraining visual grounding prefix with the instruction as input to the model (Appendix D).
If the instruction matches a visual token, the model outputs that token; else, the model outputs \u2018no OBJECT\u2019. Once the object has been found, and the instruction requires acting on it, we invoke the AE task.

5 Data Augmentations

Since the trajectories have been generated by an expert planner, certain predictions used by the CR task are underrepresented in the training data. Training for VG is unsupported, as DTC instances only contain the ground-truth object mask for each action but not for all other objects in a scene. Furthermore, preliminary experiments revealed that, when trained solely on instances from the DTC benchmark, our agent struggled to learn associations between frame and visual tokens. For example, when interacting with a fridge across multiple frames, the agent sometimes predicted visual tokens that matched the fridge in a previous rather than the current frame.

[Figure 3: Example outputs for the Contextual Routing (CR) task. The first token determines whether the agent will search or act on the environment. The second token determines if the instruction matches an entity in the scene. Finally, for instructions that require manipulation of objects, the model generates the name of the object.]

Finally, there is a mismatch between how the trajectories were annotated and the data the agent is being exposed to. While annotators observed interpolated trajectories, the agent only perceives the state before and after an action. This discrepancy adds significant language variability and complexity, posing challenges to the agent\u2019s comprehension. To address these challenges, we use the vision dataset provided by Gao et al. (2023) to create synthetic instruction data.
Additionally, we deploy an early version of our agent to collect mini-episodes of successful trajectories within the Alexa Arena. Supplementary material regarding the data generation process is provided in Appendix E.

Visual Augmentations: Starting from the images within the vision dataset, we create an additional 180k training and 88k validation instances, where each instance is an (image, instruction, action) triplet. When generating target actions, we used the ground truth positions of the agent and the object to ensure the agent was in proximity. For synthetic instructions, we used ChatGPT (OpenAI, 2022) as a starting point to create templates. To select diverse visual scenes, we used CLIP (Radford et al., 2021) to embed the images. We then applied k-means clustering, where the number of clusters is determined by the maximum number of action-object instances. Finally, we selected the image that is closest to each centroid.

CDF Augmentations: While the visual augmentations are useful for single-step interactions, they cannot help the model on longer trajectories requiring consecutive interaction with the environment. To address this challenge, we use one of the early variants of EMMA to collect more training trajectories. For each mission, we provide the agent with step-by-step instructions and only include successful trajectories in our training data. We refer to these trajectories as CDF augmentations, following the challenge definition format of the Alexa Arena (Gao et al., 2023). Overall, we sampled 38k missions for training and 15k for validation. These missions include: 1) picking up and placing objects from receptacles or containers; 2) picking up objects under the presence of objects acting as distractors (i.e., the same object but with a different color, or the same object placed in a different position); 3) cleaning and filling objects in the sink; and 4) interacting with domain-specific objects (e.g., the color changer and the time machine).
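The scene-selection step in the visual augmentations (k-means over CLIP image embeddings, keeping the image closest to each centroid) can be sketched with a minimal Lloyd's algorithm over precomputed embeddings. This is an illustrative stand-in under our own assumptions (seeded init, fixed iteration count), not the authors' pipeline:

```python
import math
import random

def kmeans_select(embeddings, k, iters=20, seed=0):
    """Cluster embeddings with k-means; return, per non-empty cluster, the
    index of the item closest to the centroid (one diverse scene each)."""
    rng = random.Random(seed)
    centroids = [list(embeddings[i]) for i in rng.sample(range(len(embeddings)), k)]

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    for _ in range(iters):
        # Assign every embedding to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for idx, e in enumerate(embeddings):
            clusters[min(range(k), key=lambda c: dist(e, centroids[c]))].append(idx)
        # Move each centroid to the mean of its members.
        for c, members in enumerate(clusters):
            if members:
                centroids[c] = [sum(embeddings[i][d] for i in members) / len(members)
                                for d in range(len(embeddings[0]))]
    return [min(members, key=lambda i: dist(embeddings[i], centroids[c]))
            for c, members in enumerate(clusters) if members]
```

In practice the embeddings would come from a CLIP image encoder, and k would be set by the number of action-object instances as described above.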
6 Experiments & Results

6.1 VLP Evaluation

We evaluate EMMA on four image-based downstream tasks: image captioning, visual question answering, referring expression comprehension, and natural language for visual reasoning. We treat all tasks as language generation and finetune EMMA separately on each downstream task without introducing any task-specific parameters. To evaluate the ability of our model to generate image captions, we use the MS-COCO dataset (Lin et al., 2014) and report evaluation results on the Karpathy test split (Karpathy and Fei-Fei, 2015) for BLEU-4 (Papineni et al., 2002), METEOR (Lavie and Agarwal, 2007), CIDEr (Vedantam et al., 2015), and SPICE (Anderson et al., 2016). For visual question answering, we report the VQA accuracy (Antol et al., 2015) on the test-std set of the VQA-v2 dataset (Goyal et al., 2017). We also evaluate referring expression comprehension on RefCOCOg (Mao et al., 2016), a visual grounding task that requires selecting the region described by a given phrase. We use the visual grounding prompts from our pretraining and ask the model to generate the visual token id that matches the given phrase in the image. A predicted region is correct if the intersection over union with the ground truth region is larger than 0.5.

Model | # Pretrain Samples | # Params | COCO Captioning (BLEU-4 / METEOR / CIDEr / SPICE) | VQA-v2 Accuracy | RefCOCOg Accuracy@0.5 | NLVR2 Accuracy
VL-T5 (Cho et al., 2021) | 7.6M | 172M | 34.5 / 28.7 / 116.5 / 21.9 | 70.3 | 71.3 | 73.6
VL-BART (Cho et al., 2021) | 7.6M | 172M | 25.1 / 28.7 / 116.6 / 21.5 | 71.3 | 22.4 | 70.3
UniTAB (Yang et al., 2022) | 8.1M | 211M | 36.1 / 28.6 / 119.8 / 21.7 | 71.0 | 84.5 | \u2014
OFA-base (Wang et al., 2022b) | 21.3M | 182M | 41.0 / 30.9 / 138.2 / 24.2 | 78.1 | 82.3 | \u2014
EMMA | 10.2M | 133M | 36.5 / 29.7 / 122.3 / 22.5 | 73.2 | 80.3 | 70.3
Table 1: Performance of the pretrained model on downstream image-based tasks. We report the number of pretraining samples as the number of image-text pairs. OFA uses additional vision-only and language-only data.
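The RefCOCOg correctness criterion above (intersection over union with the ground-truth region larger than 0.5) is straightforward for axis-aligned boxes; a minimal sketch, with box and function names of our own choosing:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def grounding_correct(pred_box, gold_box, threshold=0.5):
    """A predicted region counts as correct when IoU with ground truth > 0.5."""
    return iou(pred_box, gold_box) > threshold
```

Accuracy@0.5 in Table 1 is then simply the fraction of predictions for which `grounding_correct` holds.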
In the NLVR2 dataset (Suhr et al., 2019), given a caption and two images, the model must predict whether the caption matches both images. We provide the two images directly to the model and prefix the caption with the image-text matching prompt. The model then predicts either true or false. Table 1 illustrates the results of our model. We compare EMMA against similar VL models \u2014 i.e., single-stream encoder-decoder models like VL-T5 and VL-BART (Cho et al., 2021), UniTab (Yang et al., 2022), and OFA (Wang et al., 2022b). Among these models, OFA-base achieves state-of-the-art performance; however, it has 1.37\u00d7 more parameters than EMMA and is pretrained with nearly double the amount of data. On RefCOCOg, our model achieves competitive performance with UniTAB, even though it is almost half the size. Cho et al. (2021) hypothesize that the poor performance of VL-BART on RefCOCOg stems from its use of absolute positional embeddings, which leads to memorization during training. However, EMMA achieves competitive performance despite also using absolute positional embeddings. We believe this is due to our sampling procedure, which takes into account the dataset imbalance during pretraining (see Appendix A for details). 6.2 Interactive Task Completion We compare two variants of EMMA: a modular agent with two experts that are each fine-tuned on the downstream-specific tasks (i.e., CR & AE), and a unified model fine-tuned on all the tasks. As shown in Table 2, both agent variants outperform the baseline models. Additionally, the modular agent performs on par with the best-performing models on the leaderboard, while the unified agent slightly outperforms the state-of-the-art. We believe that the synergy between tasks provides an advantage to the unified model.
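The NLVR2 evaluation protocol described above (two images plus a prompt-prefixed caption, with "true"/"false" produced as generated text) can be sketched as follows. The model below is a hypothetical stub standing in for EMMA, and the prompt string is illustrative, not the paper's exact wording:

```python
# Sketch of NLVR2 accuracy: the model receives both images and the caption
# prefixed with an image-text matching prompt, and must generate "true"
# or "false"; accuracy is exact-match against the label string.
def nlvr2_accuracy(model, examples, prompt="Does the text match the images?"):
    correct = 0
    for left_img, right_img, caption, label in examples:
        pred = model(images=[left_img, right_img], text=f"{prompt} {caption}")
        correct += int(pred == label)  # labels are the strings "true"/"false"
    return correct / len(examples)

# Toy stub model that always answers "true", scored on two toy examples.
always_true = lambda images, text: "true"
examples = [("imgA", "imgB", "both images contain dogs", "true"),
            ("imgC", "imgD", "one image is empty", "false")]
stub_accuracy = nlvr2_accuracy(always_true, examples)
```

Casting the task as generation keeps NLVR2 in the same text-output interface as the other benchmarks, with no task-specific classification head.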
Table 2: Mission Success Rate (MSR) and the Number of Robot Actions (NRA) of EMMA against the top-3 leaderboard and baseline models. CR denotes QA based on the CR task ().

| Model | MSR (\u2191) | NRA (\u2193) | QA |
| --- | --- | --- | --- |
| GauchoAI (leaderboard) | 36.47 | \u2014 | \u2014 |
| SEAGULL (leaderboard) | 30.98 | \u2014 | \u2014 |
| Kingfisher (leaderboard) | 22.37 | \u2014 | \u2014 |
| NS (Gao et al., 2023) | 19.32 | 11.73 | \u2717 |
| NS (Gao et al., 2023) | 22.80 | 12.73 | \u2713 |
| VL (Gao et al., 2023) | 18.19 | 11.82 | \u2717 |
| VL (Gao et al., 2023) | 34.20 | 8.82 | \u2713 |
| EMMA-modular | 33.76 | 8.91 | \u2717 |
| EMMA-modular | 33.95 | 9.05 | CR |
| EMMA-modular | 35.16 | 8.92 | \u2713 |
| EMMA-unified | 33.26 | 8.79 | \u2717 |
| EMMA-unified | 33.59 | 8.89 | CR |
| EMMA-unified | 36.81 | 8.69 | \u2713 |

We also compare the success rate under three conditions for the inclusion of clarifications: 1) no clarifications; 2) clarifications whenever they are available for an instruction; 3) clarifications whenever they are available and the CR predicts an ambiguous referent. We observe that both versions of EMMA improve when clarification exchanges are added to the input. Performance gains are marginal in the CR case; however, the agents benefit substantially when clarifications are always provided. Note that the role of the CR task is not to decide when to clarify but to drive the agent to either act or search the environment. On top of that, during training, the agent learns to rely on all available clarification types.
Performance per Mission Type Table 3 shows the performance of the modular and the unified agents in every mission category along with the average number of ground-truth actions required by the expert planner to solve the mission.

Table 3: MSR of EMMA for each mission category.

| Mission type | Modular QA \u2717 | Modular QA \u2713 | Unified QA \u2717 | Unified QA \u2713 | Average # of ground truth actions |
| --- | --- | --- | --- | --- | --- |
| breakObject | 32.22 | 35.56 | 33.33 | 41.11 | 8.43 |
| clean&deliver | 9.20 | 13.79 | 18.39 | 20.69 | 12.65 |
| color&deliver | 0.00 | 0.00 | 0.00 | 0.00 | 14.00 |
| fill&deliver | 29.17 | 29.17 | 27.08 | 27.08 | 12.87 |
| freeze&deliver | 12.50 | 16.67 | 20.83 | 20.83 | 15.75 |
| heat&deliver | 5.13 | 7.69 | 10.26 | 10.26 | 16.92 |
| insertInDevice | 37.29 | 38.42 | 39.55 | 45.76 | 7.44 |
| pickup&deliver | 18.95 | 21.75 | 18.25 | 21.40 | 7.57 |
| pourContainer | 18.95 | 47.86 | 41.03 | 46.15 | 8.33 |
| repair&deliver | 11.11 | 20.37 | 12.96 | 22.22 | 17.05 |
| scanObject | 69.37 | 63.06 | 63.96 | 62.16 | 3.91 |
| toggleDevice | 67.62 | 68.57 | 62.86 | 65.71 | 3.94 |

[Figure 4 plots MSR (17\u201338 on the y-axis) against the proportion of train instances (0.0\u20131.0) for the Vision and CDF augmentation ablations, with a DTC reference line.] Figure 4: MSR against data ablations. The DTC line is the performance of the model after training exclusively on the DTC data, without any augmentations. Since the visual grounding task is not modeled by the DTC benchmark, we keep the grounding instances and only downsample the action execution instances.

Both agents perform well on short trajectories, whereas for longer trajectories the unified approach yields better performance. Clarifications play a key role across all mission types with the exception of scanObject missions, a behavior which is also shown in baseline models (Gao et al., 2023). Interestingly, both agents are completely unable to solve color&deliver missions. Based on the predictions of the model, the main challenge is finding the correct receptacle at the end of the trajectory, which can be easily confused with other similar objects.
Impact of Clarification Type Next, we are interested in identifying which clarifications help the model disambiguate object referents. Similarly to Chiyah-Garcia et al. (2022), we compare the object localization performance of the model with and without clarification.

Table 4: Object localization accuracy per question type for the modular and unified model. QA \u2717 columns refer to the performance of the model on the same instructions but without clarification.

| Question type | Modular QA \u2717 | Modular QA \u2713 | Gain | Unified QA \u2717 | Unified QA \u2713 | Gain |
| --- | --- | --- | --- | --- | --- | --- |
| Description | 59.05 | 66.40 | +7.35 | 62.84 | 67.98 | +5.14 |
| Direction | 73.43 | 76.56 | +3.13 | 79.41 | 83.82 | +4.41 |
| Location | 62.27 | 66.45 | +4.18 | 64.27 | 66.66 | +2.39 |
| Reference | 59.05 | 64.09 | +5.04 | 62.02 | 66.56 | +4.54 |
| Other | 75.00 | 75.00 | +0.00 | 79.41 | 79.41 | +0.00 |

Table 4 illustrates the localization accuracy for the modular and the unified model across question types. We use the same taxonomy as Gao et al. (2023) with the addition of \u2018other\u2019, which refers to out-of-scope clarifications (e.g., \u2018What should I deliver?\u2019, asked while the agent already holds the mug and is instructed to place it on the desk). Both models benefit the most from description clarifications (e.g., \u2018What does the cartridge look like?\u2019, \u2018It is black with a white label\u2019). The modular agent benefits more from reference and location clarifications, while the unified agent benefits from reference and direction clarifications. We hypothesize that the transfer between tasks enables the unified agent to perform well on instructions even without the location clarifications, so these clarifications provide only small improvements. Data Ablations We showcase the effectiveness of our augmentations by ablating the visual as well as the CDF augmentations. Figure 4 depicts the performance of the unified model for both ablations. We observe that the model benefits from both augmentations.
Vision augmentations provide performance gains from very early stages since they enable certain properties of the agent (e.g., ). On the other hand, CDF augmentations provide a steady boost as they enable the agent to solve missions that require longer trajectories. Note that our pretraining consists of image-caption objectives that enable the model to learn to ground text in singular frames. Learning frame-visual token associations is obtained during fine-tuning only. Future work could explore agents pretrained on tasks that favor learning this ability as well.

Figure 5: Example transfer of the action prediction task to the visual domain of real images. Instruction: Put some coffee in the cup. Output: pour mug . Instruction: Lift the plate with the bread on it. Output: pickup plate . Instruction: Can you put the spoon into the bowl? Output: place bowl . Instruction: open up the metallic container. Output: open jar .

7 Performance Analysis DTC Error Analysis We inspected 120 unsuccessful trajectories of our unified model (ten per mission type). Overall, the main source of errors stems from the output of the contextual routing component. This task plays a key role in our model since an incorrect prediction triggers the action execution task. In the follow-up timestep, the agent is positioned differently from what is expected and it is forced to act on the environment, which likely leads to an unrecoverable situation. For instance, the agent may be facing an empty desk while being asked to pick up a mug. Consequently, the agent acts in unpredictable ways, resulting in a state from which the agent cannot recover. The incorrect predictions mostly affect &deliver missions, where the agent completed the subgoal but failed on the deliver subgoal, as it was unable to disambiguate between multiple receptacles. Furthermore, the output of the action execution task is also susceptible to errors.
In long trajectories, we observed a few cases where the model seems to lack temporal understanding. For example, when interacting with objects on consecutive timesteps, the model used a visual token to reference the object that matched one of the previous frames in the trajectory. We also noticed a few error cases due to a lack of commonsense reasoning. Some instructions describe properties of an object, like its color and shape, without explicit reference to the object, but our agent could not make the association between an object and its properties. Visual Domain Transfer We inspect the model\u2019s ability to transfer the action prediction task to real images. We observe qualitatively that, after finetuning, the object detector struggles to detect objects from classes that are not present in the Alexa Arena. However, after reverting back to the base object detector, the EMMA-unified model is able to make reasonable predictions, as shown in Figure 5.

Table 5: Action prediction accuracy for real images from Visual Genome (Krishna et al., 2017).

| Object Detector | Arena Classes | Non-Arena Classes | Overall |
| --- | --- | --- | --- |
| Fine-tuned | 42.65 | 32.13 | 33.88 |
| Base | 67.85 | 58.85 | 60.35 |

To quantify the improvement of using the base object detector, we use scene graph annotations of GQA (Hudson and Manning, 2019) to create synthetic instructions, as described in Appendix H. Table 5 shows the accuracy for single-step instructions using real images, where the target object can be from any class or from the subset of classes that appear in the Alexa Arena. Although EMMA-unified has been fine-tuned on the action prediction task with image inputs from the fine-tuned object detector, we see a 26.48% absolute improvement when switching to the base object detector. We leave further exploration of the potential for Sim2Real transfer as future work. 8 Conclusion & Future Work We described EMMA, a unified and multitask model for embodied task completion.
EMMA performs on par with VLP models of comparable size on VL benchmarks, and also achieves a new state-of-the-art performance on the DTC benchmark (Gao et al., 2023). We decouple the downstream embodied task into three complementary tasks, all solved by the same unified model. Finally, EMMA benefits from interaction with users in the form of clarifications that are crucial for the agent\u2019s success, particularly for referential disambiguation. In terms of future work, our experiments show that trajectory augmentations are required in order to learn frame and visual token associations. However, acquiring additional data might be prohibitive for target applications where an environment simulator is not available. This requirement could be alleviated by introducing the concept of trajectories in the pretraining. Furthermore, given our visual domain transfer analysis, it would be interesting to explore whether visual encoders specialized to various target domains can be plugged into the same multimodal model to allow cross-domain transfer. Limitations We focused on the development of a dialog-enabled agent within an embodied environment. Our design aimed at creating an agent that is as autonomous as possible, without relying on external components to support it. We took one step away from modular agents with individual components towards a single model that facilitates all tasks. Nevertheless, the search routine is an external component that our agent depends upon. In theory, an agent could learn low-level actions in order to search for an object. However, this is a challenging problem that would require an excessive amount of demonstrations, even for simple instructions. Consider the case where the model has to search within multiple containers (e.g., cabinets) for the required object (e.g., a spoon). This would require the agent to learn to summarize the state from the context of history and recognize which cabinets it has already visited before opening the next cabinet.
Finally, as shown by our results, our agent does not fully model dialog actions. We used the output of the contextual routing task to request clarifications; however, we did not place emphasis on the type of clarification when making a request. An interesting follow-up would be to explore how an agent can further incorporate dialog actions similar to the ones supported in the environment and pose clarification requests when necessary. Ethics Statement Embodied AI and physical robots have some well-known ethical trade-offs. On the one hand, they can generate a positive impact, e.g. as part of care settings or helping in disaster zones. On the other hand, they can also lead to negative sociotechnical consequences, such as job displacement or dis-empowerment of individuals (M\u00fcller, 2021). Embodied agents based on large-scale VLP inherit additional challenges \u2013 some of which they share with large language models, including hallucinations, discrimination and exclusion, malicious uses, and environmental harms (Bender et al., 2021; Dinan et al., 2022; Weidinger et al., 2022). Furthermore, their multimodal nature creates additional privacy concerns, especially when developing systems for assistive home settings, as is the case in EMMA. In order to address the need for large amounts of in-domain data, EMMA is developed using simulated environments. However, transferring the learned behavior to real-world environments is still an open challenge. For example, VLP models can be prone to distribution shifts, e.g. through naturally occurring noise (Yu et al., 2023), or targeted attacks (Yu and Rieser, 2023). Consequently, embodied agents based on VLP may exhibit unintended behaviors when operating in complex real-world environments. These failures are especially grave when interacting with vulnerable groups, as is the case for care assistants.
It is therefore paramount that these models are released in a responsible way (Bergman et al., 2022), developed, and tested in ecologically valid setups in order to anticipate real-world impact (De Vries et al., 2020). Addressing these issues responsibly and proactively is essential to maximize the benefits while minimizing the potential negative impacts on individuals and society as a whole. Acknowledgements We would like to thank the Alexa Prize team and Amazon Science for their crucial technical and administrative support, along with the resources that greatly contributed towards model development. This work also used the Cirrus UK National Tier-2 HPC Service at EPCC funded by the University of Edinburgh and EPSRC (EP/P020267/1)."
+ }
+ ],
+ "Alessandro Suglia": [
+ {
+ "url": "http://arxiv.org/abs/2312.02431v1",
+ "title": "Visually Grounded Language Learning: a review of language games, datasets, tasks, and models",
+ "abstract": "In recent years, several machine learning models have been proposed. They are\ntrained with a language modelling objective on large-scale text-only data. With\nsuch pretraining, they can achieve impressive results on many Natural Language\nUnderstanding and Generation tasks. However, many facets of meaning cannot be\nlearned by ``listening to the radio\" only. In the literature, many\nVision+Language (V+L) tasks have been defined with the aim of creating models\nthat can ground symbols in the visual modality. In this work, we provide a\nsystematic literature review of several tasks and models proposed in the V+L\nfield. We rely on Wittgenstein's idea of `language games' to categorise such\ntasks into 3 different families: 1) discriminative games, 2) generative games,\nand 3) interactive games. Our analysis of the literature provides evidence that\nfuture work should be focusing on interactive games where communication in\nNatural Language is important to resolve ambiguities about object referents and\naction plans and that physical embodiment is essential to understand the\nsemantics of situations and events. Overall, these represent key requirements\nfor developing grounded meanings in neural models.",
+ "authors": "Alessandro Suglia, Ioannis Konstas, Oliver Lemon",
+ "published": "2023-12-05",
+ "updated": "2023-12-05",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "main_content": "Introduction Symbols of a language acquire meanings when used to do things in the world (Clark, 1996). In such cases, language is a cooperative enterprise used by humans to achieve specific goals. During such cooperative activities, humans coordinate meanings (Clark & Brennan, 1991). Meanings are therefore dynamic entities that humans agree upon in conversation. It is important to underline that learning a language cannot happen in isolation. Learning a language is achieved by humans engaged in activities such as language games (Wittgenstein, Anscombe, & Rhees, 1953). Such language games involve one or more interlocutors, who use language as a communication protocol used to express preferences, goals, and execute actions (Austin, 1975). \u00a92023 The Authors. Published by AI Access Foundation under Creative Commons Attribution License CC BY 4.0. arXiv:2312.02431v1 [cs.CL] 5 Dec 2023 \fA.Suglia and I.Konstas and O.Lemon There have been many attempts at computationally representing the meanings of words in a language. From symbolic approaches (e.g., (Winograd, 1971)), statistical approaches (e.g., (Landauer & Dumais, 1997)) to distributed representations (e.g., (Elman, 1990)), AI researchers have managed to achieve some significant results. Particularly, after the recent introduction of large-scale neural language models (e.g., BERT (Devlin, Chang, Lee, & Toutanova, 2019a), GPT (Radford, Wu, Amodei, Amodei, Clark, Brundage, & Sutskever, 2019)), the goal of achieving real understanding seems closer than before. However, upon careful inspection and probing, such models demonstrate only a superficial level of understanding of Natural Language. Many researchers in the field of AI and, more specifically Computational Linguistics, have argued that real understanding cannot be achieved by exposing machines to text corpora only (Bender & Koller, 2020). 
This would mean expecting somebody to learn a language by reading alone (Bisk, Holtzman, Thomason, Andreas, Bengio, Chai, Lapata, Lazaridou, May, Nisnevich, et al., 2020a). They argue that meaning is a result of grounding symbolic representations via multimodal perceptual experiences of concepts in the world. Such a level of perceptual experience can be obtained only when the agent is embodied in the environment. Motivated by the need to expose artificial agents to more sophisticated perceptual information, in this survey we focus on the visual modality as a source of perceptual information. In particular, we are interested in tasks that have been proposed to study the symbol grounding problem (Harnad, 1990) in situated and embodied visual contexts. The main contributions of this survey are the following: 1. We provide a novel categorisation of visually grounded language games based on the skills and capabilities required to solve them; 2. We apply the categorisation to a collection of 50 datasets presented in the last 20 years in the literature on Artificial Intelligence and Natural Language Processing; 3. We report an analysis of 51 recent visually grounded models that have been proposed to tackle the tasks studied in this work; 4. We propose relevant research questions that will guide future research in grounded language learning. Our survey is divided into several sections that are described as follows: Section 2 presents a discussion of the background topics that are required to understand the importance of language games for V+L research; Section 3 reports our task categorisation that analyses the V+L tasks presented so far in the literature; Section 4 presents a survey of the V+L models that are able to encode the vision and language modalities; finally Section 5 presents a discussion on promising research directions for the future of V+L research; Section 7 concludes the paper. 2. 
Visually Grounded Language Learning Teaching agents to understand Natural Language has been the main objective of several research projects since the early days of Artificial Intelligence (Winograd, 1971; Newell, Simon, et al., 1972). However, understanding the meanings of words in Natural Language can be traced back to the Symbol Grounding problem (Harnad, 1990): \u201cHow can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols?\u201d Figure 1: An example of the symbol grounding problem: a non-native speaker coming across the word \u201cpasticciotto\u201d for the first time will struggle to understand it because it is expressed only via amodal symbols. When the speaker receives an image associated with it, they are able to perform a perception/categorisation step thanks to which the word is grounded in experience, finally revealing its meaning. Associating a referent in the real world with a symbolic representation via a specific and contextualised meaning still represents a challenge for artificial agents. For instance, in Figure 1, interpreting the meaning of the word \u201cpasticciotto\u201d would not be possible without a categorisation step that associates the meaningless symbols of the word to a concept which stands for a composition of several perceptual features such as visual, functional, olfactory and gustatory features. These are a result of the agent\u2019s experience of the concept in a given physical context. This problem immediately implies that teaching computers Natural Language just by providing textual information might not be enough to effectively demonstrate that the agent understands language just like humans do.
Artificial agents should experience the objects that symbols refer to, completing the so-called Semiotic Triad defined by (Peirce, 1902) as follows: \u201cI define a sign as anything which is so determined by something else, called its Object, and so determines an effect upon a person, which effect I call its interpretant, that the latter is thereby immediately determined by the former.\u201d From this statement we recognise the importance of the perceptual experience in the language learning process. The agent has to be exposed to a representation of the entity in the world (Object) that the word used (sign) refers to. The connection between the sign and the object becomes concrete once the object is perceived by the sensory organs of the agent. This process is what we consider as grounding. Figure 2: Semiotic Triad defined by (Ogden & Richards, 1923) which describes the relation that holds between symbols and objects in the world as well as the mental process that connects them. The connection between language and thought was the main topic of interest of the seminal work by (Ogden & Richards, 1923). Particularly, they define Symbols as the tools to organise, communicate and direct Thoughts. As illustrated in the diagram in Figure 2, a symbol has a causal relationship with a thought that is instantiated when we speak. On the other hand, we use a symbol to refer to a given reference in the external world. Between the abstract Thought and the Referent (object in the world) a direct (or indirect) causal relation exists as well. 2.1 Compositionality & Meaning The term \u2018compositionality\u2019 is defined as the following property of language: the ability to construct larger linguistic expressions by combining simpler parts.
The focus of linguistic studies has been on semantic compositionality, i.e., the principle whereby the meaning of a linguistic expression is a function of the meaning of its components and the rules used to combine them (Montague, 1970). This ability to recombine known grounded symbols is what enables the grounding of complex expressions to occur. Following the example in (Harnad, 1990), suppose we are aware that both the symbols \u201chorse\u201d and \u201cstripes\u201d are grounded by appropriate representations learned from experience. Now we can consider a new category \u201czebra\u201d which is obtained as a combination of the elementary categories as follows: \u201czebra\u201d = \u201chorse\u201d & \u201cstripes\u201d. The answer by (Harnad, 1990) to the question \u201cWhat is the representation of a zebra?\u201d is as follows: \u201cIt is just the symbol string \u2018horse & stripes\u2019. But because \u2018horse\u2019 and \u2018stripes\u2019 are grounded, \u2018zebra\u2019 inherits the grounding, through its grounded symbolic representation. In principle, someone who had never seen a zebra (but had seen and learned to identify horses and stripes) could identify a zebra on first acquaintance armed with this symbolic representation alone.\u201d This ability to combine grounded representations to generate other grounded representations is the result of associating meaning with a given representation. In this sense, compositionality can be considered as a more general learning bias that humans leverage in order to learn new tasks by reusing their knowledge about previously learned tasks and concepts (Fodor & Pylyshyn, 1988; Fodor & Lepore, 2002).
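Harnad's "zebra = horse & stripes" example above can be made concrete with a deliberately toy sketch (not from the survey): grounded concepts as sets of perceptual features, with a new concept inheriting its grounding purely by composing already-grounded ones. The feature names are invented for illustration.

```python
# Toy illustration of compositional grounding: "zebra" is never grounded
# directly; it inherits grounding from the union of two grounded concepts.
horse = {"four_legs", "mane", "hooves"}
stripes = {"black_white_pattern"}

# Composition of elementary categories: zebra = horse & stripes.
zebra = horse | stripes

def identifiable_on_first_acquaintance(observed_features, concept):
    """An agent can identify a referent if all the concept's grounded
    features are present in what it currently perceives."""
    return concept <= observed_features

# A scene containing a striped, maned, four-legged animal with hooves.
observation = {"four_legs", "mane", "hooves", "black_white_pattern", "tail"}
```

This mirrors Harnad's point: an agent that has grounded "horse" and "stripes" can identify a zebra on first acquaintance from the composed representation alone.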
Indeed, compositionality represents a key element to cope with the huge amount of labelled data required by classical Deep Learning algorithms (Marcus, 2018) which learn tasks in an end-to-end fashion, directly from the raw data (LeCun, Bengio, & Hinton, 2015). Therefore, we are interested in learning representations of concepts that are grounded in perceptual experience and also demonstrate some degree of compositionality. A prerequisite for learning and developing meanings is to develop grounded representations that can favour systematic generalisation to novel combinations of concepts that we have never seen before. Possessing such flexibility assumes that such learned conceptual representations should, by definition, not be static. They must dynamically change depending on the context and signals that the agent receives when interacting with the environment. Compositionality comes into play when an agent has to dynamically refine its internal representations when tasked to coordinate with another agent to collaboratively achieve a goal (Lake & Murphy, 2021). In such Natural Language interactions, words acquire meanings that are instantiated in the ways in which they are used as part of a systematic communication protocol which is a language. 2.2 Language Games The definition of language as a tool for human communication underlines the social functions of language. Additionally, humans use it to express themselves and to manipulate objects in their environment. Therefore, learning a language means being able to communicate effectively using a predefined communication protocol based on symbols and transformations of them. The meaning associated with each symbol can be acquired by engaging in goaloriented conversations with other agents. This practice is what (Wittgenstein et al., 1953) defined as \u201clanguage games\u201d. 
Therefore, language functions within the active, practical lives of speakers involved, and is deeply connected (i.e., grounded) with non-linguistic elements and behaviours that are part of their environment. Indeed, (Wittgenstein et al., 1953) considers language as a system of signs that acquire meaning in situ, embedded in speakers\u2019 lives. When completing real-world tasks, artificial agents must communicate in order to understand what the goal and requirements of a certain task are. Therefore, in this work, we underline the importance of the hypothesis that \u201chuman communication is thus a fundamentally cooperative enterprise, operating most naturally and smoothly within the context of (1) mutually assumed common conceptual ground, and (2) mutually assumed cooperative communicative motives\u201d (Tomasello, 2010). This definition highlights the importance of using the terminology of language games as an experimental framework for understanding the complexity, properties, and importance of certain tasks for the overall goal of language understanding and visual grounding. In contrast to fully symbolic approaches which argue that language can be defined as an abstract symbol manipulation system (Newell et al., 1972; Pylyshyn, 1986; Fodor, 1983), (Wittgenstein et al., 1953) considers language as a system that has a function, and acquires meaning, only when it is actually used to do things in the world. As in the Builder example (Wittgenstein et al., 1953), the phrase \u201cpick up\u201d will acquire meaning only when the Architect instructs the Builder to pick a specific object up. In this survey, we underline the importance of the functional and goal-oriented nature of Natural Language, investigating how artificial agents can acquire visually grounded representations by playing situated and embodied language games. 3.
Visually Grounded Language Learning Tasks This paper surveys the state of the art in Visually Grounded Language Learning, which is a type of Interactive Grounded Language Learning. This can be considered as an instance of Interactive Task Learning (Gluck, Laird, & Lupp, 2018), i.e., \u201cany process by which an agent (A) improves its performance (P) on some task (T) through experience (E), when E consists of a series of sensing, effecting, and communication interactions between A, its world, and crucially other agents in the world\u201d. We explore the idea of Wittgenstein\u2019s language games (Wittgenstein et al., 1953) as a means to assess the linguistic capabilities of interactive learning agents. Specifically, as in the case of the \u201cTalking Heads\u201d experiments (Steels, 2015), we explore the idea that linguistic capabilities could be assessed by 3 relevant language games: 1) guessing games that require the agent to guess an unknown object in a scene; 2) action games that involve the execution of actions by an agent as requested by a specific command; and 3) descriptive games which require that the agent is able to describe the scene that it observes. We report an analysis of the state-of-the-art in terms of datasets and tasks that were proposed to study the problem of visual grounding for the English language (summarised in Table 1). We will divide the tasks into 3 macro categories following the different types of language games: 1) discriminative tasks; 2) generative tasks; and 3) interactive tasks. Furthermore, we analyse each paper along the following key dimensions: \u2022 Embodied: is the agent able to explore, perceive and act in the environment? \u2022 Discriminative: does the agent have to produce a single output by selecting it from a given set of candidates? \u2022 Generative: does the agent have to produce a sequence of outputs conditioned on its input? 
• Interaction with environment: is the agent able to manipulate and change the state of the environment?
• Interaction with other agents: does the agent have to communicate with other agents to solve the task?

Visually Grounded Language Learning: a review of language games, datasets, tasks, and models

Table 1: Systematic analysis of several state-of-the-art datasets for Grounded Language Learning tasks. The analysis considers specific characteristics of the tasks and environments that are important for learning grounded meanings. We use "x" to mark that a dataset satisfies a given property. Columns: Authors | Dataset | Embodied | Discriminative | Generative | Inter. with Env | Inter. with Agents | Size.

(Gordon et al., 2018) IQA yes x x x 75K
(Suhr et al., 2019b) NLVR no x 100K
(Suhr et al., 2019a) CerealBar yes x x 1.2K
(Shridhar et al., 2020) ALFRED yes x x 25K
(Ku et al., 2020) RxR yes x 126K
(Padmakumar et al., 2021) TEACh yes x x 3.2K
(Deitke et al., 2020) RoboTHOR yes x x
(Wortsman et al., 2019) yes x x
(Jain et al., 2019) yes x x
(Kottur et al., 2021) SIMMC 2.0 no x x 11K
(Chen et al., 2019) Touchdown yes x 9.3K
(Haber et al., 2019) Photobook no x 2.5K
(Hermann et al., 2017) yes x
(Yan et al., 2018) CHALET yes
(Mirowski et al., 2019) StreetLearn yes
(De Vries et al., 2018) Talk The Walk yes x x 10K
(Kim et al., 2019) CoDraw no x 10K
(Brodeur et al., 2017) HOME yes
(Ramakrishnan et al., 2021) HM3D yes
(Savva et al., 2019) Habitat yes
(Kiela et al., 2020) Hateful Memes no x 10K
(Ruis et al., 2020) gSCAN no x x 300K
(Shekhar et al., 2017) FOIL no x 297K
(Zellers et al., 2019) VCR no x 290K
(Park et al., 2020) VisualCOMET no x 1.46M
(Das et al., 2017a) VisDial no x x 1.4M
(Bogin et al., 2021a) COVR no x 262K
(Liu et al., 2021) MaRVL no x 5.6K
(Narayan-Chen et al., 2019) CollabMinecraft yes x x 509
(Zarrieß et al., 2016) PentoRef no x 1.3K
(Pezzelle & Fernández, 2019) MALeViC no x 20K
(Pezzelle et al., 2020) BD2BB no x 18K
(Huang et al., 2016) no x 20.2K
(Da et al., 2021) EMU no x x 48K
(Clark et al., 2021) Iconary no x 55K
(Goyal et al., 2017a) VQA 2.0 no x 1.1M
(Hudson & Manning, 2019) GQA no x 22M
(Kafle & Kanan, 2017) TDIUC no x 1.6M
(Yang et al., 2021) no x 53.2K
(Kottur et al., 2019) CLEVR-dialog no x 4.25M
(Zellers et al., 2021) PIGLeT yes x 280K
(Abramson et al., 2020) Playroom yes x x
(Ilinykh et al., 2019b) Tell-me-More no x 22K
(Dobnik et al., 2020) CUPS no x 1.3K
(Yu et al., 2017b) BURCHAK no x 2.5K
(Tokunaga et al., 2012) REX no x x 9.8K
(Chevalier-Boisvert et al., 2018) BabyAI yes x
(Thomason et al., 2019a) CVDN yes x x 2K
(Ilinykh et al., 2019a) MeetUp yes x x 430
(Puig et al., 2020) Watch-And-Help yes x x 1.3K
(Zheng et al., 2022) Spot the Difference no x 95K
(Kiseleva et al., 2022) IGLU yes x x 509
(Bisk et al., 2016) BlocksWorld no x x 12K
(Zhong et al., 2021) SILG no x x
(Elliott et al., 2016) Multi30K no x 150K
(Wang et al., 2021) Multisubs no x 4.5M
(Kazemzadeh et al., 2014) ReferitGame no x 20K
(Johnson et al., 2016) DenseCap no x 4.1M
(Krishna et al., 2017a) ActivityNet no x 20K
(Thrush et al., 2022) Winoground no x 1.6K
(Bugliarello et al., 2022) IGLUE no x 28K
(Krojer et al., 2022) ImageCoDe no x 22K
(Shridhar et al., 2021) ALFWorld yes x x 25K
(Gao et al., 2022) Dialfred yes x x 53K
(Gao et al., 2023) Alexa Arena yes x x 46K
3.1 Discriminative tasks

A discriminative task is defined as a language game in which the agent, given an image I and corresponding textual information t, has to select an option ĉ among a set of candidates C. The agent is successful if ĉ matches the target option c*. Such tasks have been the very first instances of grounded language learning tasks, inspired by well-known image classification tasks in the Computer Vision community (Deng, Dong, Socher, Li, Li, & Fei-Fei, 2009).

One of the most famous tasks in this category is Visual Question Answering (VQA). The generic formulation of this task, as shown in Figure 3, assumes that the agent receives a question about an image and has to provide an answer to it. The answer can be in free-form text. However, current implementations of VQA tasks, such as the one presented by (Antol et al., 2015), assume that the agent has a specific answer vocabulary (vocabulary-based VQA) that it has to choose from.

Figure 3: Example from the Visual Question Answering dataset (Antol et al., 2015): the agent should learn to generate the answer "bananas" given the image and the question "What is the mustache made of?".

In this task, the agent should learn to reason about the objects in the image to answer the question. Questions usually require that the agent is aware of specific categories of objects, their colour, or their relative position in the image. However, the first release of the dataset had a bias according to which questions could be answered by relying on only one modality, or only on a specific subset of tokens in the question. For this reason, as described by (Goyal, Khot, Summers-Stay, Batra, & Parikh, 2017b), a proper balancing procedure for the dataset is required.
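The vocabulary-based formulation reduces to scoring every candidate answer against the (image, question) pair and returning the argmax. A minimal sketch follows; it is purely illustrative, not an implementation from the surveyed papers: `score` is a toy stand-in for a learned compatibility model, and the image annotations are hypothetical.

```python
# Minimal sketch of a discriminative (vocabulary-based VQA) language game:
# given an image I and a question t, pick the best candidate from a fixed set C.
# `score` is a hypothetical stand-in for a learned compatibility model.

def score(image_feats, question, candidate):
    # Toy heuristic: count overlap between the candidate and the objects
    # annotated on the image (a real model would learn this from data).
    annotations = image_feats.get("objects", [])
    return sum(candidate == obj for obj in annotations)

def answer(image_feats, question, candidates):
    # The agent is successful if the argmax candidate matches the target c*.
    return max(candidates, key=lambda c: score(image_feats, question, c))

image = {"objects": ["bananas", "man", "glasses"]}
candidates = ["bananas", "wood", "plastic"]
print(answer(image, "What is the mustache made of?", candidates))  # -> bananas
```

Casting the game as an argmax over a fixed answer set is exactly what makes the head of the answer distribution easy to exploit, as discussed below for balanced datasets.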
Such balancing prevents the case where high-capacity models obtain high accuracy only because they learn spurious correlations in the data. Other ways to mitigate this problem have been proposed. For instance, the TDIUC dataset (Kafle & Kanan, 2017) divides the questions into 12 categories to give a better understanding of the model's capabilities. A related effort in this direction is the GQA dataset (Hudson & Manning, 2019). GQA is a VQA dataset automatically generated by relying on Visual Genome scene graphs (Krishna, Zhu, Groth, Johnson, Hata, Kravitz, Chen, Kalantidis, Li, Shamma, et al., 2017b). Having a structured representation of the scene adds a semantic layer on top of the purely perceptual one. Thanks to this additional level, it is possible to perform a more fine-grained analysis of the model's abilities. For instance, the authors propose measures of consistency of the model predictions (i.e., is the answer to semantically similar questions the same?), validity (i.e., does the agent respond with a colour to a colour question?), and plausibility (i.e., does the answer make sense given the question?). However, because the problem is cast as a classification task, models learn to model only the head of the answer probability distribution. Therefore, they perform very poorly when generalising to out-of-distribution examples. To address this problem, specific datasets that test the generalisation abilities of VQA models have been proposed (e.g., (Agrawal, Kembhavi, Batra, & Parikh, 2017)). The community proposed the CLEVR dataset (Johnson, Hariharan, van der Maaten, Fei-Fei, Lawrence Zitnick, & Girshick, 2017) to enable a more in-depth evaluation of the elementary visual reasoning skills of an agent, using synthetic images generated with the Blender render engine. Another instance of a discriminative task is Visual Entailment (VE) (Xie et al., 2018).
It is inspired by Textual Entailment tasks in Natural Language Processing (e.g., (Bowman, Angeli, Potts, & Manning, 2015)), defined as follows: given a text premise P and a text hypothesis H, the goal is to determine if P implies H. This task is usually cast as a 3-way classification problem where the label set includes "entailment", "neutral", or "contradiction", based on the relation conveyed by the (P, H) text pair. As shown in Figure 4, VE replaces the premise P with a real-world image.

Figure 4: Example from the Visual Entailment task from (Xie et al., 2018). Given the state of the world (exemplified by the image), the agent has to verify, for each hypothesis, the validity of the statement. Figure adapted from (Xie et al., 2018).

Based on this idea of entailment, several other tasks can be defined. For instance, the Cornell Natural Language Visual Reasoning (NLVR) dataset (Suhr, Lewis, Yeh, & Artzi, 2017) is a corpus containing 92,244 sentence-image pairs, whose aim is to teach an agent whether a given statement is true or false for a given image. What makes this dataset interesting is that it includes several semantic phenomena such as cardinality (soft/hard) statements, existential and universal relations, as well as spatial relations. Sentences associated with the images are collected from real users; however, the corresponding images are synthetic and composed of simple coloured blocks. (Suhr et al., 2019b) extended this dataset with real-world images. Another discriminative task is the Find One mismatch between Image and Language caption (FOIL) dataset (Shekhar et al., 2017). It defines 3 different discriminative sub-tasks described as follows (see Figure 5 for an example):

1. Binary Classification: Given an image and a caption, the model is asked to mark whether the caption is correct or wrong.
The aim is to understand whether trained models can spot mismatches between their coarse representations of language and visual input;

2. Foil Word Detection: Given an image and a caption, the model has to detect the foil word (i.e., a single word that is incorrect for the caption). The aim is to evaluate the understanding of the system at the word level;

3. Foil Word Correction: Given an image, a caption and the foil word, the model has to detect the foil and provide its correction. The aim is to check whether the system's visual representation is fine-grained enough to extract the information necessary to correct the error.

Figure 5: Example extracted from the FOIL dataset. It describes the 3 discriminative tasks: binary classification, foil word detection and foil word correction. Figure adapted from (Shekhar et al., 2017).

Most of these datasets have been annotated relying on image datasets that were derived from an English lexical database (i.e., WordNet (Miller, 1995)). However, this imposes a bias on the data collection, because all the elicited concepts will be familiar only to people with a European/American background. Since an over-arching goal in this area is to create general-purpose agents, they must rely on an unbiased source of training data. (Liu et al., 2021) proposed MaRVL, a novel dataset for multicultural reasoning over vision and language. First of all, they propose a novel concept hierarchy that can be considered universal and not specific to the English language. After this step, they collect a dataset using a similar approach to (Suhr et al., 2019b). The resulting dataset contains 5.6K captions in 5 typologically, geographically and genealogically diverse languages. As shown in Figure 6, this makes MaRVL a very challenging benchmark for multimodal and multilingual language understanding.
Figure 6: Examples extracted from the MaRVL dataset. The first image shows an example in Indonesian and the second in Tamil. Figures adapted from (Liu et al., 2021).

Similar to FOIL, MALeViC (Pezzelle & Fernández, 2019) defines a sentence verification task to study how Vision+Language (V+L) models can learn the meaning of gradable adjectives of size from different visual contexts. This benchmark includes several tasks that are specifically designed to study how an agent can learn to identify the reference set of a given statement in a specific visually-grounded context. This is a crucial skill that agents have to master when dealing with complex Embodied AI tasks involving a high level of ambiguity. Another task formulated as a discriminative language game is "Be Different to Be Better" (Pezzelle et al., 2020). It is defined as a candidate selection task where the agent, based on a certain intention (i.e., their goal, attitude or feeling), has to choose, among a set of candidate actions, the one that a person would perform. Other instances of these tasks involve matching the correct caption to a given image. This is the case of Winoground (Thrush et al., 2022), where an agent has to score the correct (image, caption) pair higher than the others, inspired by the Winograd Schema Challenge (Levesque, Davis, & Morgenstern, 2012). This dataset is extremely challenging because it contains images with an intricate visual structure that have been manually sourced by a team of experts. Humans are able to match the correct caption to its image, while state-of-the-art models have a very hard time completing this task, with top performance just above random chance. The ImageCoDe dataset provides an additional benchmark for contextual image retrieval (Krojer et al., 2022), where the agent has to select the image that matches a contextual caption given a set of 9 distractors.
As a result of their evaluation, they show that their benchmark is highly multi-modal (i.e., the agent requires both modalities to do well) and that current state-of-the-art V+L models are still far from human-level performance. Most likely, this is because some of the situations require understanding the unfolding of events over time and cannot be learned from static images only. Similarly to the efforts in VQA to inspect the visual grounding ability of the model, (Bogin et al., 2021a) propose a new benchmark for compositional language understanding (COVR). The task, inspired by NLVR (Suhr et al., 2019b), is formulated as a sentence verification problem with multiple reference images (see Figure 7). The dataset has been created by defining specific splits requiring different compositional generalisation skills. It uses the GQA (Hudson & Manning, 2019) scene graphs to first find adversarial images to a target image. Then, it instantiates questions from language templates and finally uses manual validation and paraphrasing to generate the actual text. The generated examples require the agent to master the use of quantifiers and other operations, such as counting. This makes the task particularly challenging, especially in zero/few-shot splits where the agent only has the chance to learn from a few examples of a given logical operator. Most of these tasks can be solved by relying on information immediately available in the image (e.g., the properties of the objects or the relationships between them). However, understanding the effects of given actions, as well as the goals of specific actors in an image, plays an important role in visual commonsense reasoning. To tackle this important problem, (Zellers et al., 2019) propose the Visual Commonsense Reasoning challenge. It is defined as a multiple-choice classification task where the agent has to select the correct answer to a given question as well as specify "why" that answer is relevant.
PIGLeT (Zellers et al., 2021) studies the problem of learning physical commonsense knowledge. The authors automatically generated trajectories of actions using a planner and then annotated specific transitions where a state change happens.

Figure 7: Example extracted from the COVR dataset. At training time, the agent is exposed to specific usages of quantifiers as well as object attributes. At test time, novel, unseen usages are presented so that the model has to systematically generalise to novel statements. Figure adapted from (Bogin et al., 2021a).

Figure 8: Example extracted from the PIGLeT dataset. Each example contains the representation of the state of the world before and after the action "throw object" happens. The world state is expressed in terms of visual attributes derived from the 3D environment. Language annotations are available for each step of the transition. Figure adapted from (Zellers et al., 2021).

In particular, as shown in Figure 8, for each transition they collect a set of visual attributes for each state, as well as a description of the action that triggered the state change and a description of the final state. One of the main tasks involves predicting the attributes of the final state given the action executed by the agent. VisualCOMET (Park et al., 2020) is another dataset for visual commonsense reasoning in static images. Given an image, a person that appears in it is selected, and multiple reasoning inferences are annotated that are supposed to describe: 1) what needed to happen before; 2) the intents of the person; and 3) what will happen next. Another multimodal dataset that requires higher-order reasoning skills is the Hateful Memes challenge (Kiela et al., 2020).
This dataset was created to develop models that are able to detect internet abuse, an important problem on current social media platforms, where sarcasm and other very subtle visual and language cues are used in an offensive manner. This problem is particularly important because it underlines the fact that words can assume different meanings depending on the multi-modal context in which they are used. Being able to derive their meaning is therefore a highly complex reasoning task.

3.2 Generative tasks

Generative (or descriptive) language games are formally defined by assuming that there is an agent that receives an image I and has to generate a sequence of tokens t̂ = ⟨t1, t2, ..., tn⟩. The task is evaluated by considering a specific semantic similarity measure S that determines how close t̂ is to the ground-truth target sequence t.

Figure 9: Image captioning example from the MSCOCO dataset (Lin et al., 2014). A possible caption for this image is: "a teddy bear that has been placed on a pile of rocks."

One of the first tasks that was proposed in the literature is image captioning, in which the main objective is to generate a meaningful description of an input image (see Figure 9 for an example). One of the most used datasets for this task is MSCOCO (Lin et al., 2014), which provides 5 captions for every image as well as segmented objects. A similar dataset is Flickr30K (Young, Lai, Hodosh, & Hockenmaier, 2014). In a similar fashion, video captioning tasks have been proposed (e.g., (Wang, Wu, Chen, Li, Wang, & Wang, 2019; Lei, Yu, Berg, & Bansal, 2020)), where either the subtitle or a caption has to be generated for a given video clip. A more complex scenario is represented by video summarisation (e.g., (Papalampidi, Keller, Frermann, & Lapata, 2020)).
Analogous to text summarisation (Nenkova & McKeown, 2011), video summarisation involves a more structured approach that resembles classic Natural Language Generation pipelines (Reiter & Dale, 2000). It usually involves several steps: 1) selecting the video frames that are worth considering for the summary; 2) aggregating the information in a coherent way to favour sound summaries; and 3) using the aggregated pieces of information to generate the text of an appropriate summary. Another task that involves multiple video frames as part of the input data is visual storytelling (Huang et al., 2016). In this case, the agent is provided with multiple images that describe the progression of some event over time. This task is more complex than video captioning because it is not limited to factually describing the image but also involves adding some extra contextual cues (e.g., emotional and time-related aspects). Another generative task that involves describing a visual scene in multiple rounds is Tell-me-More (Ilinykh et al., 2019b). In their data collection, they ask their users to imagine they are talking to someone over the phone about an image they can see. This image belongs to a set of candidate images that are available to the other agent only. The receiver has to select which image the speaker is referring to. Most of the generation tasks described so far assume that the agent has to generate a description for the entire visual input. To evaluate how fine-grained visual representations are, it is useful to focus only on certain parts of the image. This task is commonly called dense captioning or referential expression generation. In this task, an agent has to generate a description that applies to a very specific region of the image (i.e., a bounding box).
For instance, (Kazemzadeh et al., 2014) propose the ReferIt game, intended as a generic data collection procedure for referential expression generation. This procedure has then been applied to several datasets such as COCO (Lin et al., 2014) and ImageCLEF (Grubinger, Clough, Müller, & Deselaers, 2006). Concurrently, rich semantic annotations of images have been collected via the Visual Genome project (Krishna et al., 2017b). From these annotations, (Johnson et al., 2016) released the DenseCap dataset to learn to generate descriptions associated with specific bounding boxes of the image. (Krishna et al., 2017a) collected a dataset for captioning in videos as well. In particular, this dataset was annotated by dividing the video into key activities (or moments) and then asking the annotator to report a sentence describing each activity. Another important research direction is to use visual information to boost performance in machine translation. The first dataset to be proposed for this task was Multi30K (Elliott et al., 2016). The dataset was created by producing German translations for each image description available in Flickr30K (Young et al., 2014). However, this process was completed without providing images to the annotators, introducing an apparent bias in the collected data (Li, Ataman, & Sennrich, 2021). To mitigate this issue, (Wang et al., 2021) proposed Multisubs, a large-scale multimodal and multilingual dataset that facilitates research on grounding words to images in the context of their corresponding sentences. This dataset is based on movie subtitles, but due to copyright restrictions the authors did not use the images from the videos as references. Instead, they relied on a multilingual knowledge base to find references to general images about certain concepts (i.e., BabelNet (Navigli & Ponzetto, 2010)).
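The generative games in this section share one abstract loop: emit a token sequence t̂ for the visual input and score it against the reference t with a similarity measure S. The sketch below is illustrative only: the generator is a hypothetical lookup, and the token-overlap score is a toy stand-in for the learned or n-gram metrics (e.g., BLEU, CIDEr) used in practice.

```python
# Minimal sketch of a generative language game: the agent produces a token
# sequence t_hat and is evaluated with a similarity measure S against the
# ground-truth sequence t. Both `generate_caption` and `similarity` are
# hypothetical stand-ins for real models and metrics.

def generate_caption(image_objects):
    # Toy generator: a real model would decode tokens autoregressively
    # conditioned on image features.
    if "teddy bear" in image_objects:
        return ["a", "teddy", "bear", "on", "rocks"]
    return ["an", "image"]

def similarity(hypothesis, reference):
    # S(t_hat, t): fraction of reference tokens recovered by the hypothesis.
    ref = set(reference)
    return len(ref & set(hypothesis)) / len(ref)

t_hat = generate_caption({"teddy bear", "rocks"})
t_ref = ["a", "teddy", "bear", "placed", "on", "a", "pile", "of", "rocks"]
print(similarity(t_hat, t_ref))
```

The choice of S is what distinguishes the tasks above: image captioning compares against several references per image, while dense captioning restricts both generation and scoring to a single region.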
3.3 Interactive tasks

In Section 2, we introduced the concept of language games as an experimental framework for studying machine learning tasks aimed at assessing certain language understanding skills of artificial agents. This is important because, if we aim to create artificial agents that are trained to solve ecologically valid tasks (De Vries, Bahdanau, & Manning, 2020), we need to consider the fact that the real world contains: 1) sophisticated visual stimuli in the form of videos or images; and 2) ambiguous, spontaneous, and incremental language that is typically used when communicating in natural language.

The grounded language learning tasks that we have described so far did not have a real connection with the concept of language games described in the previous sections. Although each language game could be considered as situated in the visual scene exemplified by the image, there was no notion of interaction between the learning agent and the environment or other agents. Each task involved generating one or multiple elements associated with the input. To get closer to the idea of language games described by (Wittgenstein et al., 1953), we are interested in exploring interactive tasks where the agent is either situated or embodied in the environment and has to interact in particular ways. This can be either by communicating with other agents to solve a given task, or by executing actions in the environment to achieve a given goal.

3.3.1 Situated Interactive Games

Agents that are able to support a conversation are much more human-like than agents that are able to answer single questions only. For this reason, the VQA task has been extended by the Visual Dialogue (VisDial) dataset (Das et al., 2017a). In the VisDial dataset, as depicted in Figure 10, the agent receives as input the dialogue history composed of a sequence of question/answer pairs, the current question, and the current reference image.
Two different tasks have been proposed for this dataset: 1) answer selection, where the agent has to select the correct answer among a set of 100 candidates; and 2) answer generation, where the agent should generate a response token by token. However, as highlighted by (Massiceti, Dokania, Siddharth, & Torr, 2018), state-of-the-art results can be achieved by using the Canonical Correlation Analysis algorithm, which relies on ad-hoc feature extractors for images (image features extracted from a pretrained ResNet (He, Zhang, Ren, & Sun, 2015)) and text (question and answer representations learned using FastText (Bojanowski, Grave, Joulin, & Mikolov, 2017)). Another downside of this task is the relevance of history: (Agarwal, Bui, Lee, Konstas, & Rieser, 2020) show that the number of dialogues actually requiring dialogue history is very low. Therefore, models can do well just by using the current question to generate the answer. A dataset that proposes to treat dialogue history and common ground as first-class citizens is Photobook (Haber et al., 2019). It is a dataset of 2,500 human-human goal-oriented English dialogues between two participants who have to identify shared images in their photobooks by exchanging text messages. In their evaluation, the authors focus on the tasks of reference resolution and reference generation in such visually grounded dialogues. However, they do not assess the ability of artificial agents to actually play the game in a collaborative fashion by both generating utterances and selecting target photos.

3.3.2 Language Evolution and Emergent Communication

Another line of work is focused on language evolution and emergent communication in referential language games (Lewis, 1969).
A specific instance of the game, depicted in Figure 11, can be described as follows: a speaker describes an object in a scene using a set of symbols, and the hearer needs to select, among a set of distractors, which one is the object described by the speaker. A recent implementation of the game has been studied by several authors (e.g., (Lazaridou, Hermann, Tuyls, & Clark, 2018; Havrylov & Titov, 2017; Bouchacourt & Baroni, 2018)), who use either synthetic images or real-world images coming from MSCOCO.

Figure 10: An example from the VisualDialog dataset. The agent has access to a reference image and to the dialogue history composed of the question/answer pairs generated so far, and it has to generate an answer for the current question. Figure adapted from (Das et al., 2017a).

A downside of these approaches is that they focus on single-turn interactions, and are thus not considering the intricacies of dialogue phenomena which are essential for an agent that has to learn by natural language interaction. It is important to underline that, in the language evolution and emergent communication literature, agents start tabula rasa and form communication protocols that maximise task rewards. While this purely utilitarian framework results in agents that successfully learn to solve the task by creating a communication protocol, these emergent communication protocols do not bear the core properties of Natural Language (Kottur, Moura, Lee, & Batra, 2017). This is ultimately what agents are required to develop if we want them to be embodied in the environment and join in conversations with humans. Due to the complexity introduced by having situated dialogues divided into many turns, there have been several initiatives that have tried to study very specific problems in isolation.
For example, (Clark et al., 2021) propose Iconary, a collaborative game with two agents: 1) a Drawer, who receives a phrase and has to draw it using a specific set of icons; and 2) a Guesser, who generates text to guess the phrase. This dataset is appealing because it involves an asymmetry of information that favours communication between players to guess the correct phrase. PentoRef (Zarrieß et al., 2016) is another puzzle game where the Instruction Follower has to manipulate specific game pieces to achieve the goal that the Instruction Giver has in mind. In the same spirit, (Kottur et al., 2019) developed CLEVR-dialog, a two-player collaborative game where the aim is to reconstruct a hidden scene graph representation known only to one of the players. The focus of this task is on the dialogue history and its importance for reference resolution. Similarly, the CoDraw game (Kim et al., 2019) involves two players: a Teller and a Drawer. The Drawer asks questions to the Teller in order to accurately reconstruct the image that only the Teller can see. The task requires that the agents develop Natural Language understanding and generation skills in order to complete the game. The quality of the reconstructed image is used as a proxy for the quality of the communicative skills of the Drawer.

Figure 11: Referential game used by (Lazaridou et al., 2018) as a benchmark for emergent communication between two agents: the Speaker and the Listener. The Speaker, given a representation of the target image (which could be either symbolic or in the form of pixels), generates a message by sampling symbols from a fixed lexicon. Given the message sent by the Speaker, the Listener has to understand which one is the target object among the distractors. Figure adapted from (Lazaridou et al., 2018).

Another situated interaction dataset is Situated and Interactive Multimodal Conversations (SIMMC) (Kottur et al., 2021).
It represents a collaborative task where a user is looking for an item of clothing and has to ask an AI assistant for help in a simulated clothes shop. Another interactive situated dialogue task is CUPS (Dobnik et al., 2020). In this setup, the authors define a situated context represented by a table with multiple coloured cups. They assume there are multiple speakers who have different perspectives of the same scene, each with different objects missing. Their task consists of spotting which objects are missing and different. The difference in perspective favours communication between the players to succeed in the game. In this setup, we assume that the two speakers have knowledge of the language used to communicate. Instead, in BURCHAK (Yu et al., 2017b), the authors collected a dataset where one of the agents, the Teacher, is trying to teach new visually grounded words in a completely made-up language to the Student, who is not familiar with it. Again, the concept of asymmetry of information is leveraged to favour communication between agents. In contrast to the previously described tasks, the GuessWhat?! dataset (De Vries et al., 2017) represents the first benchmark that involves a goal-oriented dialogue that favours: 1) the acquisition of discriminative features associated with the objects in the scene, by playing the role of the Questioner – a module which generates questions about a given image; and 2) the development of Natural Language understanding capabilities, by playing the role of the Oracle – the module which has to coherently reply to the Questioner. As shown in Figure 12, the Questioner has to understand the image in order to generate coherent questions about the reference scene and, at the same time, has to understand the feedback provided by the Oracle in order to adapt its dialogue strategy.
scene (b) GuessWhat?! dialogue Figure 12: An example dialogue extracted from the GuessWhat?! dataset (De Vries et al., 2017). Figure 12(a) shows a possible scene and Figure 12(b) shows the game played by the Questioner and the Oracle. The Oracle is aware that the target object is the \u201cDonut\u201d highlighted in green, and has to support the Questioner in guessing it by replying Yes/No to their questions. GuessWhat?! involves both generation and understanding of Natural Language expressions that are grounded in the reference scene. We consider both capabilities as fundamental in learning grounded language learning. This is because agents able to support a conversation in natural language with other agents should be able to understand and generate coherent responses. In a scenario in which agents are divergent \u2013 they have different perspectives, different skills, and different languages \u2013 Natural Language communication emerges because it is essential to coordinate upon certain references, and certain goals (Chandu, Bisk, & Black, 2021; Benotti & Blackburn, 2021). The dataset Spot the Difference presented by (Zheng et al., 2022) contains 95K simulated dialogues based on two agents trying to solve a referential game. Another example of divergent agents is MeetUp! (Ilinykh et al., 2019a). This is a game situated in a simulated environment where two players have to converse and coordinate in order to meet in the same room. Differently from GuessWhat?!, in this game the two agents have different visual perspectives. This asymmetry of information forces agents to communicate and clarify their references in order to complete their tasks. 
3.3.3 Embodied Interactive Games

Differently from situated tasks, many approaches can be classified as Embodied AI tasks (Savva et al., 2019), which are focused on the execution of actions either in the real world or within 3D virtual environments (e.g., Matterport3D (Chang, Dai, Funkhouser, Halber, Nießner, Savva, Song, Zeng, & Zhang, 2017), HoME (Brodeur et al., 2017), CHALET (Yan et al., 2018), HABITAT (Savva et al., 2019), and AI2THOR (Kolve, Mottaghi, Han, VanderBilt, Weihs, Herrasti, Gordon, Zhu, Gupta, & Farhadi, 2017)). (Misra, Bennett, Blukis, Niklasson, Shatkhin, & Artzi, 2018a) jointly model navigation and plan execution by receiving visual information from the current scene and generating a plan as a sequence of task-dependent actions, such as moving forward, changing the orientation of the camera, as well as object interaction. (Hermann et al., 2017) use the DeepMind Lab environment (Beattie, Leibo, Teplyashin, Ward, Wainwright, Küttler, Lefrancq, Green, Valdés, Sadik, et al., 2016) to define a language-guided navigation task where the agent has to reach an object indicated in a natural language instruction. (Gordon et al., 2018) present an interactive VQA task in AI2THOR (as shown in Figure 13) where the agent has to answer questions related to specific objects in the environment. The agent has to navigate in the environment to discover the objects and answer questions about them. Therefore, the task involves both reasoning and navigation skills to be successfully solved. A navigation task for the CHALET environment, called LANI, has been proposed by (Misra et al., 2018a). In addition, they propose an instruction-following dataset called CHAI that requires the agent to execute a set of actions to complete a given “recipe” in the household domain.

Figure 13: Example from the IQA dataset based on the AI2THOR environment presented in (Gordon et al., 2018).
The agent starts from an initial position in the environment and receives the question as well. It has to move in the environment in order to find the clues that will help it to answer the question.

Another line of work is interested in studying navigation tasks in simulated environments with photo-realistic scenes. Several datasets involving photo-realistic environments have been proposed: (Mirowski et al., 2019) introduce StreetLearn as a simulated environment to foster research in perception, planning, memory and navigation. The Room-to-Room dataset (Anderson, Wu, Teney, Bruce, Johnson, Sünderhauf, Reid, Gould, & Van Den Hengel, 2018b) is another relevant benchmark for navigation tasks, in which the agent receives an instruction that tells it where to go and has to generate a sequence of actions to reach the destination. (Chen et al., 2019) propose another Google Street View-based environment associated with navigation instructions that should be followed by the agent to reach a given goal position in the world. The Room-across-Room dataset (Ku et al., 2020) was designed to study more fine-grained abilities of the agent to ground the instruction in temporally relevant referents of a photo-realistic 3D environment. This dataset also includes information in different languages to study how people refer to objects in space in different cultures. The TalkTheWalk dataset (De Vries et al., 2018) comprises a cooperative task involving a guide and a tourist. The guide is aware of the map and has to give instructions to the tourist about how to reach a specific goal position in New York City. This dataset represents a very complex challenge for AI agents because it requires both navigation and communication skills.
CerealBar (Suhr et al., 2019a) is another collaborative game situated in a 3D environment where a follower has to collect a set of cards (with specific symbols) by following the instructions of another agent. The follower has a limited set of actions that are mostly related to navigating the environment and picking up cards. Another collaborative task that requires natural language communication is MeetUp! (Ilinykh et al., 2019a). In this task, one agent has to follow instructions and ask for feedback to reach a room that another agent is aware of. The navigator has to provide correct spatial references so that the other agent can give instructions about where to go. A variant of this interactive navigation task has been proposed by (Thomason et al., 2019a), namely CVDN. It is a larger dataset comprising 2050 human-human navigation dialogues over 7k navigation trajectories, composed of question-answer exchanges across 83 houses.

Figure 14: An example of a task to be solved in the ALFRED benchmark (Shridhar et al., 2020). The agent has to master several skills to succeed, such as navigation, visual memory and object manipulation. Figure adapted from (Shridhar et al., 2020).

Such navigation-oriented tasks have a very important role in developing embodied agents with sophisticated reasoning and memory skills (for a more comprehensive list of these tasks refer to (Gu, Stefani, Wu, Thomason, & Wang, 2022)). However, learning grounded meaning only by moving in the environment is rather limiting: the agent is not able to experience the effect of specific actions on specific objects. This concept is related to the theory of affordances (Gibson, 1977) and its importance for learning grounded meanings of objects (Glenberg & Kaschak, 2002). (Chevalier-Boisvert et al., 2018) propose a virtual environment with a teacher that emits synthetic language generated by a context-free grammar.
They define a curriculum for the agent based on increasingly complex tasks involving navigation (e.g., “navigate to a specific object”) and manipulation (e.g., “open the door on your left”). The agent has to master preliminary levels to solve successive stages of the game. The game culminates with the BossLevel, in which all the previous capabilities are required. Another example of a task where the agent has a richer action set is Collaborative Minecraft (Narayan-Chen et al., 2019), where an Architect has a goal structure that they want to build and has to communicate specific instructions to the Builder, who has to move coloured blocks around to create the desired shape. In this case, the resulting dialogues are complex and rich in the spatial referential expressions required to specify the position of given game pieces. An up-to-date version of this task is presented in the IGLU challenge (Kiseleva et al., 2022). Another task with a similar block-manipulation setup is BlocksWorld (Bisk et al., 2016). However, the set of actions that the agent is able to execute in these tasks is rather limited. The same is true for another interactive task that was proposed by (Abramson et al., 2020). They define Playroom, a 3D environment where a learning agent has to perform several tasks of different complexities, such as Q&A and instruction following. Again, the action set is limited to navigation actions (e.g., forward, back, etc.) and one manipulation action (grab). This downside makes these environments an unsuitable benchmark for learning the meaning of specific actions that have an effect on the environment. AI2Thor (Kolve et al., 2017) is the only 3D environment with rich simulated physics that models a variety of actions for the agent, such as picking up, slicing, and toggling objects. Based on this 3D environment, the research community has proposed several challenges.
For instance, ALFRED (Shridhar et al., 2020) is the first language-guided task completion benchmark that requires both navigation and manipulation actions. As shown in Figure 14, the agent has to complete a goal specified by a Natural Language instruction. In addition to the overall goal instruction, instructions are provided for the agent for every sub-goal. Therefore, it can be seen as an interactive language game where the follower only executes the actions required to complete the task. In ALFRED, every task is very complex, as it is composed of very long action sequences (the average trajectory length is 40, compared to around 6 for other navigation tasks). Each trajectory can be divided into a sequence of sub-goals required to achieve the final goal. Manipulation sub-goals might involve irreversible actions (e.g., slicing an apple). It is therefore essential for the agent to learn that specific actions should not be executed on specific objects. Another set of interactive tasks is presented in SILG (Zhong et al., 2021), which integrates under the same framework grid-like worlds such as NetHack (Küttler, Nardelli, Miller, Raileanu, Selvatici, Grefenstette, & Rocktäschel, 2020) as well as 3D simulated environments like Touchdown (Chen et al., 2019). They all share the same symbolic representation of the scene, which greatly simplifies the visual scene that an agent is supposed to receive as input. For example, the Touchdown environment (Chen et al., 2019) scenes have been converted to symbols by segmenting its panoramas into semantic grids indicating pixels that belong to specific classes (e.g., sky, bicycle, etc.). This makes the visual grounding problem even harder to study, considering that the model is not able to learn from the variety of signals coming from real-world scenes.
ALFWorld (Shridhar et al., 2021) is another embodied AI dataset that benefits from language feedback and annotations that are automatically generated using the same game engine used for text adventure games. In this work, the authors stress the importance of teaching AI agents to “imagine” the sequence of instructions that can be executed to achieve a goal, and to decouple this from the actual plan execution, which depends on the underlying environment. Thanks to this new environment, they demonstrate that the agent can more efficiently learn language commands in text-world environments, and reliably generalise to unseen environments.

Figure 15: An example of a task to be solved in the TEACh challenge (Padmakumar et al., 2021). The Commander has to support the Follower in completing the task. The Follower is the only agent that can actually execute actions in the environment. Figure adapted from (Padmakumar et al., 2021).

In the ALFRED benchmark, it is assumed that the agent has perfect vision and is always able to find all the objects required to complete the task. Unfortunately, this is not realistic, and the current setup does not allow the agent to resolve ambiguities or possible misunderstandings because there is no real interaction between the agents. In this sense, promoting a symbiotic relationship (Rosenthal, Biswas, & Veloso, 2010) between the user and the agent, where both humans and artificial agents can solve tasks and mitigate each other's limitations, is essential for language learning. Natural language offers the user a medium to flexibly express their desired outcomes and provide guidance to avoid task failure; but as we also often provide underspecified, ambiguous, or even incorrect instructions, the agent is challenged to understand known aspects of its environment and pose informative questions when uncertain, in order to minimise the risk of failure.
To promote action coordination and object ambiguity resolution, (Gao et al., 2022) created DialFRED, a set of 53K question/answer pairs that can be used to resolve ambiguities in the ALFRED environment. To take the idea of collaboration between agents even further, (Padmakumar et al., 2021) propose the TEACh challenge, a two-player game situated in the AI2Thor environment. Inspired by ALFRED, there are multiple tasks of different complexity to be solved. As shown in Figure 15, there are two players with different roles: Follower and Commander. The Follower executes the instructions provided by the Commander, while the Commander knows the task to be solved and provides assistance to the Follower without actually executing any actions. The ultimate challenge of this dataset is to train an agent that can support a human in completing daily tasks. Finally, to advance the state-of-the-art in Embodied Conversational AI, (Gao et al., 2023) proposed the Alexa Arena, a novel multi-room simulated environment that was used as the main benchmark for the Amazon SimBot challenge, a university competition in this area (Shi, Ball, Thattai, Zhang, Hu, Gao, Shakiah, Gao, Padmakumar, Yang, et al., 2023).

4. Grounded Language Learning Models

In this section, we provide a critical survey and analysis of ML models that have been proposed to tackle some of the grounded language learning tasks reviewed in the previous sections. The search for studies focused on the topic of grounded language learning and was limited to the period 2015-2022. We used Scopus 1 to complete the search, considering only works published in relevant Machine Learning or Natural Language Processing conferences. From the original search completed in 2019, we selected a total of 32 papers. These papers were then extended with additional references published in major conferences in the field, such as ACL and EMNLP.
Overall, we analysed a total of 50 studies according to specific criteria. These criteria have been selected because we consider them essential for artificial agents able to support the visually grounded interactive language games that we described in this work. The selected criteria are reported as follows:

• Compositionality: is the representation of a complex entity the result of the combination of simpler ones learned by the model?

• Dialogue: is the model able to support a conversation in natural language with another agent?

• Architecture: is the proposed approach designed as an end-to-end neural architecture or as a modular architecture?

• Visual representation: which kind of visual representation is used?

• Reasoning: does the model support any kind of reasoning mechanism able to leverage the learned representations?

• Concept representation: which kind of representations are learned and used by the system?

• Few-shot learning: is the model able to solve a task by having access to a few learning examples?

• Real-world vision: is the model exposed to real-world images?

• Natural Language: is the model exposed to Natural Language (i.e., English content generated by real users)?

1. https://www.scopus.com/

A summary of the comparison is reported in Table 2. A more detailed explanation of the work is reported in the following section. From the completed literature review, it emerged that in studying grounded language learning, often one of the two modalities (i.e., vision and language) is lacking or is not considered at all. Indeed, several of the analysed papers either use real-world vision or synthetic images while focusing on more complex language structures.
Bridging this gap is really important in order to understand how well current neural models are able to handle complex visual scenes, as well as interesting Natural Language phenomena available in human language annotations. Running experiments in the real world would be ideal to simulate such complex visual scenes, although this is not always practical. One way to cope with this limitation is by designing rich 3D environments that expose models to photo-realistic visual scenes. When analysing the language dimension instead, another simplification is that the tasks are typically discriminative or generative tasks involving a single interaction. Only very few approaches consider interactive tasks where the dialogue history is actually relevant for the task at hand (Agarwal et al., 2020). In some cases, the developed models are actually able to learn a dialogue strategy to complete the task, whereas others assume that the dialogue is scripted in advance via a hand-crafted procedure.

4.1 Learned Multi-modal Representations

We are interested in understanding the type of representations these systems learn, and the way they use them for specific grounded language learning tasks.

4.1.1 Visual representations

A few approaches rely on low-level sensory features modelling RGB colour channels or shape information (Antunes et al., 2017; Patki & Howard, 2018). Based on the assumption that an image can be expressed as a bag of visual words (Csurka, Dance, Fan, Willamowski, & Bray, 2004; Grauman & Darrell, 2005; Sivic, Russell, Efros, Zisserman, & Freeman, 2005), several works (e.g., (Thomason et al., 2019b; Yu, Eshghi, & Lemon, 2017a)) use a set of visual classifiers that are manually mapped to specific classes representing visual concepts.
Due to the expressivity of deep neural networks, a common practice is to use latent features learned by a Convolutional Neural Network (CNN) trained on a large-scale image dataset (e.g., ImageNet (Deng et al., 2009)). Specifically, the features belonging to the last layer of a CNN have been widely adopted due to their ability to be re-used for tasks other than image classification. With such features, it is possible to represent an entire scene with a single distributed representation. However, as demonstrated in the VisDial challenge, using an object detector to predict object bounding boxes (e.g., FastRCNN (Anderson, He, Buehler, Teney, Johnson, Gould, & Zhang, 2018a)) substantially improves performance. Having such object-centric representations allows the agent to learn the task by reasoning over objects and their relationships, which are potentially invariant across scenes. In this setup, the model receives as input a high-dimensional feature vector for every detected object (a Region Of Interest (ROI) vector). Additionally, to provide the model with positional information about the objects, bounding box coordinates are encoded as additional features.
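To make this concrete, the following NumPy sketch (ours, not taken from any of the surveyed systems; the 2048-d feature size and the 7-d geometry encoding are illustrative assumptions) shows how an ROI feature vector is commonly extended with normalised bounding-box features:

```python
import numpy as np

def encode_object(roi_feat, bbox, img_w, img_h):
    """Append normalised bounding-box geometry to an ROI feature vector.

    roi_feat: (d,) feature vector from an object detector (a 2048-d pooled
              feature is assumed here for illustration).
    bbox:     (x1, y1, x2, y2) in pixels.
    """
    x1, y1, x2, y2 = bbox
    # Normalise coordinates to [0, 1] and add relative width, height and
    # area -- a common 7-d positional encoding for visual tokens.
    pos = np.array([
        x1 / img_w, y1 / img_h, x2 / img_w, y2 / img_h,
        (x2 - x1) / img_w, (y2 - y1) / img_h,
        (x2 - x1) * (y2 - y1) / (img_w * img_h),
    ])
    return np.concatenate([roi_feat, pos])

obj_token = encode_object(np.random.rand(2048), (40, 60, 240, 220), 640, 480)
print(obj_token.shape)  # (2055,)
```

Each detected object thus becomes one fixed-size vector carrying both appearance and position, which the downstream model can treat as a "visual token".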
To incorporate class information associated with each object, either class labels are encoded (Tan & Bansal, 2019a) or a probability distribution over the object detector label set (Zhou et al., 2020). This provides the model with more fine-grained information associated with the scene as well as a better representation of the objects in it. This approach was used by several recent models (e.g., (Xie et al., 2018; Lu et al., 2018; Yi et al., 2018; Tan & Bansal, 2019a; Zhou et al., 2020)). However, extracting features from an object detector is a very time-consuming operation and limits the ability of the model to encode specific objects that are not part of the object detector training set. Therefore, some approaches revisited grid features generated by a CNN to improve the efficiency of multi-modal models (e.g., (Jiang, Misra, Rohrbach, Learned-Miller, & Chen, 2020; Kim, Son, & Kim, 2021)).

Table 2: Comparison of several multimodal models developed for several grounded language learning tasks, along the criteria listed above (compositionality, dialogue, architecture type, vision module, concept representation, generalisation, real-world vision, and Natural Language).
4.1.2 Language representations

Language representations, on the other hand, are typically trained separately, either using the reference task dataset or by pretraining with self-supervised techniques. Such representations are either in the form of word embeddings (e.g., (Mikolov, Yih, & Zweig, 2013; Pennington, Socher, & Manning, 2014)) or contextual embeddings (e.g., (Devlin, Chang, Lee, & Toutanova, 2018)). The latter have had tremendous success in both Natural Language Understanding and Natural Language Generation tasks (e.g., GPT-3 (Brown, Mann, Ryder, Subbiah, Kaplan, Dhariwal, Neelakantan, Shyam, Sastry, Askell, et al., 2020)).

4.1.3 Vision+Language representations

Once we have a representation for each modality, how do we derive a multimodal representation? A multimodal representation is a representation of data obtained by combining multiple sources of input such as visual, perceptual or symbolic information (Baltrušaitis, Ahuja, & Morency, 2018). Representing multiple modalities poses many difficulties regarding the way we combine heterogeneous sources and how we deal with noise in them (i.e., when they are noisy or completely missing). Therefore, designing mechanisms to meaningfully represent such data becomes a crucial problem in multimodal representation learning. In recent work by (Baltrušaitis et al., 2018), several multimodal fusion mechanisms have been reviewed:

• early: the model learns multi-modal representations as part of the learning objective by jointly minimising a loss function that optimises a posterior involving both modalities;

• middle: the model uses independent training data and objectives for specific modalities. A parameterised similarity function can be used to learn the relationships between features in multi-modal spaces;

• late: the converse of the previous approach; it first computes similarity scores between the representations and then aggregates them together.
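The three fusion regimes above can be caricatured in a few lines of NumPy (a minimal sketch of our own; the feature sizes, the linear projection, and the choices of cosine similarity and mean aggregation are illustrative assumptions, not taken from the reviewed models):

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative dimensionalities: a 2048-d image feature, a 300-d text feature.
W_joint = rng.normal(size=(2048 + 300, 512)) * 0.01

def early_fusion(img_feat, txt_feat):
    # Early fusion: concatenate raw modality features and learn one joint
    # representation end-to-end (here, a single linear projection).
    return np.concatenate([img_feat, txt_feat]) @ W_joint

def middle_fusion(img_emb, txt_emb):
    # Middle fusion: modalities are embedded independently; a similarity
    # function (cosine here) relates the two spaces.
    return float(img_emb @ txt_emb
                 / (np.linalg.norm(img_emb) * np.linalg.norm(txt_emb)))

def late_fusion(scores):
    # Late fusion: similarity scores are computed first, then aggregated.
    return float(np.mean(scores))

joint = early_fusion(rng.normal(size=2048), rng.normal(size=300))
print(joint.shape)  # (512,)
```

The essential difference is where the interaction happens: inside a jointly trained representation (early), through a learned similarity between independently trained spaces (middle), or only at the level of final scores (late).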
As highlighted by (Kiela, 2017), a combination between early and middle fusion can be considered plausible from a cognitive perspective because “it allows for learning unimodal representations independently but which also allows for combining said representations into an overall multimodal one that takes all modalities into account, possibly in varying degrees”.

Figure 16: An overview of the VLP architecture presented by (Zhou et al., 2020). It uses a single Transformer stack to encode both visual tokens and textual tokens. Then, it is pretrained on Conceptual Captions using both masked language modelling and sequence-to-sequence losses. Figure adapted from (Zhou et al., 2020).

For instance, early fusion models have been relatively successful in multimodal tasks thanks to the development of large-scale unimodal encoders. Concatenating, averaging or summing unimodal representations are some of the methods used to generate multimodal representations (e.g., (Shekhar et al., 2019; Das et al., 2018)). More powerful multimodal architectures have been proposed that use specific co-attention mechanisms (e.g., (Deng et al., 2018; Yu et al., 2018a; Lu et al., 2018)). More recently, Transformers have been used to learn multimodal representations by combining modality-specific inputs via self- and cross-attention. Such models are indeed closer to the early/middle fusion models identified by (Kiela, 2017). Transformer-based models have been particularly successful in most of the grounded language learning tasks that we reviewed. Following (Bugliarello, Cotterell, Okazaki, & Elliott, 2021), we can define two different types of multimodal architectures: 1) single-stream Transformers; and 2) dual-stream Transformers. Single-stream Transformers assume that there is a single Transformer stack that receives both visual and textual inputs.
Dual-stream Transformers instead assume that each modality is encoded by a modality-specific encoder whose outputs are fused by a dedicated cross-modal encoder. This distinction was similarly described by (Baltrušaitis et al., 2018) as joint encoder representations and coordinated encoder representations, respectively. However, (Bugliarello et al., 2021) demonstrate that dual-stream attention functions act as restricted versions of the attention function in any single-stream Transformer architecture. Thus, dual-stream architectures have an inductive bias that favours interactions between modalities.

Single-stream Multimodal Transformers

One of the first single-stream multimodal models is VLP (Zhou et al., 2020), which was followed by many others (e.g., (Chen, Li, Yu, Kholy, Ahmed, Gan, Cheng, & Liu, 2020; Li, Yatskar, Yin, Hsieh, & Chang, 2019; Li, Yin, Li, Hu, Zhang, Zhang, Wang, Hu, Dong, Wei, Choi, & Gao, 2020)). Its architecture is depicted in Figure 16. First of all, the architecture mimics the one used by BERT (Devlin et al., 2019a). Following BERT, it represents the language input as WordPieces (Schuster & Nakajima, 2012). It introduces visual tokens, which are derived from object-centric representations generated by FastRCNN. Such visual tokens are then projected to a dimensionality comparable to that of the textual tokens. In this way, they can be passed as input to the stack of Transformer layers to generate multimodal representations of the input. Following the success of self-supervised learning for text-based Transformers, VLP is initialised with a pretrained BERT and then pretrained using several multimodal losses on the Conceptual Captions dataset (Sharma, Ding, Goodman, & Soricut, 2018).
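The single-stream input construction described above can be sketched as follows (our illustration; the 2048-d ROI features, the 768-d model size, and the random projection are assumptions chosen for the example, and position/segment embeddings are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 768  # assumed text embedding size (BERT-base style)
# Learned linear projection of detector features to the model dimension.
W_vis = rng.normal(size=(2048, d_model)) * 0.02

def build_single_stream_input(roi_feats, token_embs):
    """Build the input sequence of a single-stream multimodal Transformer.

    Visual tokens (ROI features) are projected to the text embedding size
    and concatenated with the textual token embeddings into one sequence,
    which a single Transformer stack then processes jointly.
    """
    vis_tokens = roi_feats @ W_vis                  # (n_obj, d_model)
    return np.concatenate([vis_tokens, token_embs], axis=0)

seq = build_single_stream_input(rng.normal(size=(36, 2048)),   # 36 objects
                                rng.normal(size=(12, d_model)))  # 12 words
print(seq.shape)  # (48, 768)
```

After this step, self-attention inside the shared stack lets every token, visual or textual, attend to every other, which is exactly what makes the architecture "single-stream".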
In order to support both discriminative and generative tasks, VLP is trained with both a masked language modelling loss and a sequence-to-sequence loss (the two are randomly alternated during training). In this way, it is one of the few models that can be used for both VQA and image captioning. Other models have also proposed additional self-supervised losses for the visual modality. For instance, UNITER (Chen et al., 2020) uses masked region modelling, which consists of reconstructing the original visual tokens or predicting their class. By learning to predict missing objects, the model learns to better model the visual context, which can enhance language understanding. Another model that belongs to this family is FLAVA (Singh, Hu, Goswami, Couairon, Galuba, Rohrbach, & Kiela, 2022), which takes advantage of the advancements in Vision Transformers (Dosovitskiy, Beyer, Kolesnikov, Weissenborn, Zhai, Unterthiner, Dehghani, Minderer, Heigold, Gelly, et al., 2020) to learn representations of the image directly from pixels, without any object detector features. Both language and visual features are then fed into modality-specific Transformers that derive representations for each modality. Then, a single cross-modal Transformer derives multimodal representations. The main novelty of FLAVA is its ability to perform both multimodal and unimodal tasks. In particular, it is pretrained with both multimodal and unimodal datasets so that it can simulate scenarios where only some modalities are available. Additionally, it can leverage the additional signal derived from both unimodal and multimodal task-specific losses.

Dual-stream Multimodal Transformers

In contrast with single-stream multimodal Transformers, many dual-stream Transformers have been proposed in the literature. As highlighted before, these models have two dedicated modality-specific encoders that are used to encode language and vision separately.
An additional cross-modal Transformer block is then used to "fuse" the hidden representations generated for the two modalities. One instance of these models is LXMERT (Tan & Bansal, 2019b), which has an "Object-Relationship Encoder", a dedicated Transformer block that encodes object representations derived from a pretrained object detector, and a "Language Encoder", implemented as a Transformer block that encodes language tokens. Additionally, a "Cross-Modality Encoder", implemented as another Transformer block, is used to fuse the two modalities and favours the development of cross-modal representations that are used for the downstream tasks. This process is performed in a symmetric way: first, language representations are used as queries in the attention operations against the vision token representations; then, vision token representations are used as queries in the multi-head attention operation. Concurrently, ViLBERT (Lu, Batra, Parikh, & Lee, 2019) defined an alternative way of using cross-attention to "fuse" the vision and language modalities. This is implemented as a two-step approach where, after using modality-specific encoders to generate latent representations for language and vision respectively, the model uses a "co-attention" Transformer layer (shown in Figure 17) for a target modality that performs multi-head attention using as queries the representations from the target modality, and as keys and values the representations associated with the other modality.

Figure 17: The co-attention layer implemented in ViLBERT to fuse vision and language representations. The co-attention layer relies on multi-head attention and is applied in parallel to both vision and language representations to derive multimodal representations. Figure adapted from (Lu et al., 2019).
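A minimal sketch of this co-attention pattern, with a single head and no learned projections (both simplifications; ViLBERT's actual co-attention blocks also include projections, residual connections, and feed-forward layers):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Attention where queries come from the target modality and
    keys/values from the other modality."""
    d = queries.shape[-1]
    attn = softmax(queries @ keys_values.T / np.sqrt(d))
    return attn @ keys_values

def co_attention(lang, vis):
    """ViLBERT-style co-attention applied symmetrically: each modality
    attends to the other to derive multimodal representations."""
    lang_out = cross_attention(lang, vis)  # language queries, vision K/V
    vis_out = cross_attention(vis, lang)   # vision queries, language K/V
    return lang_out, vis_out

rng = np.random.default_rng(0)
lang = rng.standard_normal((7, 16))  # 7 language tokens
vis = rng.standard_normal((4, 16))   # 4 region tokens
l_out, v_out = co_attention(lang, vis)
print(l_out.shape, v_out.shape)  # (7, 16) (4, 16)
```

Note how each output keeps the sequence length of its own modality while its content is a mixture of the other modality's representations, which is the "coordinated" fusion contrasted with single-stream concatenation.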
4.2 Compositionality

Natural languages are characterised by immense productivity, meaning that they can generate an infinite number of expressions. According to linguists, this is possible thanks to the property of compositionality. Of specific interest is semantic compositionality, the principle whereby the meaning of a linguistic expression is a function of the meaning of its components and the rules used to combine them (Frege, 1892; Montague, 1970). In this paper, we adopt a broader definition of compositionality related to the ability to combine simpler features to obtain novel objects/entities. This broader definition can be extended to compositionality in vision, where an object can be seen in terms of its attributes (i.e., cat = has fur, has eyes, has 4 legs, etc.). When studying the generalisation power of artificial agents, it is fundamental to consider compositionality because it can be conjectured to be a landmark not only of language but of human thought in general (Fodor & Pylyshyn, 1988; Lake, Ullman, Tenenbaum, & Gershman, 2017). Following (Harnad, 1990), this broader definition of compositionality is a key ingredient of learning grounded representations. As demonstrated in the GROLLA evaluation framework (Suglia, Konstas, Vanzo, Bastianelli, Elliott, Frank, & Lemon, 2020), learning grounded representations should result in high performance on in-domain examples as well as generalisation to examples never seen during training. Several papers take into account only some aspects of compositionality, which are fully integrated neither into the learned representations nor into the model's reasoning mechanism. The work presented by (Choi et al., 2018) studies how, in a referential game, two agents evolve their linguistic skills to generate a compositional language.
In particular, they develop a two-player image description game: the speaker and hearer receive two random images. The speaker needs to generate a sequence of characters to describe the image that they see. The hearer looks at its image and needs to decide whether they are looking at the same image or not. They base their model on the Obverter model (Batali, 1998): the obverter technique motivates an agent to search over messages and generate the ones that maximise its own understanding. They study the properties of the language that evolved from the communication game. Additionally, they conduct a zero-shot evaluation by holding out five objects from the dataset during training and observing how agents describe them during the test phase. Discriminative features in this case are just colour and shape (with a strong bias towards colour), so the task cannot tell to what extent these networks are able to compose the learned representations. (Mordatch & Abbeel, 2018) study the properties of the evolved communication protocol in a population of agents that can interact with each other via either verbal or non-verbal communication (i.e., pointing or guiding). The environment contains a fixed number of agents and landmarks. Every agent can either execute an action (i.e., go to location, look at location, or do nothing) or utter a symbol. Every agent has a goal that is not revealed to the other agents, requiring communication skills in order to be successful. Unlike these models, which try to analyse to what extent neural networks are able to learn a grounded compositional language, the work presented by (Thomason et al., 2019b; Kottur et al., 2019; Yi et al., 2018) includes logical form as a structural bias in their models. (Thomason et al., 2019b) uses a semantic parser, trained separately from the agent, to derive semantic representations of the language in the form of predicative structures.
Powerful generalisation capabilities can be obtained when compositional representations are directly integrated into the model. For instance, (Yi et al., 2018) presents a model that learns to answer questions about images by "executing" a sequence of predefined functional programs. First, it learns to generate a rich scene graph for the reference image and, as a second step, generates a functional program for the given question that, once executed against the scene graph, returns the answer. (Kottur et al., 2019) uses CorefNMN (Kottur, Moura, Parikh, Batra, & Rohrbach, 2018) for the CLEVR-Dialog task with an approach similar to (Yi et al., 2018). CorefNMN is a neural model equipped with a program decoder that generates functional programs given the dialogue history and the current question. Then, it derives the answer by executing the extracted functional programs. A specific program, "Refer", can be used by the model to access a pool of previously mentioned entities in order to complete visual coreference resolution over the turns of the dialogue. Integrating these functional programs in the model guarantees a high level of generalisation because the model does not focus on learning spurious correlations in the dataset but learns to effectively use the provided "modules" to complete the task. Relying on ad-hoc symbolic representations to facilitate the generation of compositional representations was proposed first by (Socher, Lin, Ng, & Manning, 2011) and then revisited by (Bogin et al., 2021b).

Figure 18: The neural compositional module developed by (Bogin et al., 2021b). It uses a CKY-like algorithm to derive, in a bottom-up fashion, a compositional representation of the question that is then used, via its denotation, to generate the answer. Figure adapted from (Bogin et al., 2021b).
As shown in Figure 18, (Bogin et al., 2021b) use a CKY-based algorithm (Kasami, 1966; Younger, 1967) that starts by grounding the single word embeddings associated with the words in the question. After this step, in a bottom-up fashion, compositional representations are induced from sub-span latent representations. They use a weighted sum of sub-span representations to make the computation differentiable; each sub-span weight is derived as an attention score that represents how suitable that span is for a given context. Similar in spirit, a denotation for each span is used to derive the meaning of the sub-expression, and ultimately to derive the answer. Following the work by (Andreas, Rohrbach, Darrell, & Klein, 2016), they have several hand-crafted modules, which the model learns to execute, for carrying out operations on the object set. This model does very well on the synthetic dataset CLEVR (Johnson et al., 2017) but, when tested on questions written by humans, it generalises only marginally better than a fully neural counterpart (e.g., MAC (Hudson & Manning, 2018)). This might be because some operators (e.g., computing the maximum value in a set) cannot be obtained by composing elements in the question but instead require higher-order reasoning. The other problem is that more sophisticated grounding is required in some cases: their model only grounds specific phrases to objects, but in some cases more fine-grained properties or numbers have to be grounded, which is something the model does not support. The same ground-compose approach was implemented in the work presented by (Zhang et al., 2021). They apply a syntactic parser to a caption to generate a symbolic representation that guides the composition of grounded elements. First, specific noun phrases are grounded via dedicated modules to obtain a visually-grounded representation.
Then, the symbolic representation is used to combine such grounded representations together. All of the analysed papers rely on distributed representations for words that completely ignore the extralinguistic information associated with a concept and only capture information about the use of the word in context (Westera & Boleda, 2019). In fact, these models do not have a representation for the concept of cat that defines a cat as an animal with four legs, a nose, coloured fur, etc. Multimodal models can partially incorporate into their latent representations information related to the colours and specific shapes that can be associated with specific concepts. However, these models do not seem able to take into account the more complex attributes that people learn about concepts. In this sense, a compositional representation of cat should be derived by integrating specific sensory-motor properties (e.g., visual, auditory, etc.) as well as situational properties (e.g., affordances) that uniquely characterise a cat. Additionally, because such learned representations are typically encoded in a single vector, it is harder to understand what the relevant factors of variation are and how they have been combined. For instance, (Gardenfors, 2014), in his Conceptual Spaces theory, defines an object category as composed of domains, a set of convex regions in such domains, and information about meronomic (part-whole) relations as well as relations between dimensions across different domains. It is quite possible that distributed representations derived from CNNs are able, for example, to extract information about object colours, which resembles one possible domain identified in (Gardenfors, 2014)'s theory. However, it is not clear to what extent distributed representations are expressed in terms of relational information.
Such relational knowledge can be regarded as the essence of compositionality because it would allow representations to be derived by composing simpler ones together.

4.2.1 Assessing compositionality

Despite neural networks' impressive performance on several benchmarks, many researchers have argued that their representations and reasoning mechanisms are still very brittle (Marcus, 2018). As stated in (Linzen, 2020), current Deep Learning models are evaluated using a protocol that does not assess their ability to generalise to out-of-distribution data. This is a capability akin to systematic generalisation, and is therefore a core property of compositionality. Previous work on visual question answering (VQA) measures generalisation to novel question-answer pairs generated for natural (Agrawal et al., 2017; Whitehead, Wu, Ji, Feris, & Saenko, 2021) or synthetic images (Bahdanau, Murty, Noukhovitch, Nguyen, de Vries, & Courville, 2018b; Johnson et al., 2017). Similarly, (Suglia et al., 2020) proposes an evaluation framework that augments a goal-oriented evaluation metric with two auxiliary tasks aimed at assessing systematic generalisation. Prior work has also examined systematic generalisation for image captioning (Atzmon, Berant, Kezami, Globerson, & Chechik, 2016; Nikolaus, Abdou, Lamm, Aralikatte, & Elliott, 2019; Bugliarello & Elliott, 2021). (Ruis et al., 2020) examines compositionality in the context of language-guided action execution in grid-world environments, extending the framework developed by (Lake, 2019). Systematic generalisation is not the only property that a compositional system can demonstrate. For instance, (Pantazopoulos, Suglia, & Eshghi, 2022) extends the multifaceted compositional evaluation framework presented by (Hupkes, Dankers, Mul, & Bruni, 2020) to image captioning.
According to their definition, an image captioning model can be considered compositional if it possesses three different properties: (1) systematicity: the ability to generalise to unseen combinations of concepts learned in isolation during training; (2) productivity: the capacity to extend predictions beyond the training observations; and (3) substitutivity: the robustness of predictions under synonym substitution. By analysing a model under the above-mentioned compositional frameworks, we can assess its ability to be robust to unseen examples. Therefore, concepts such as compositionality and robustness are connected to the concept of visual grounding. In the context of visual question answering, several other works have tested the robustness of V+L models. For instance, (Hudson & Manning, 2019) proposes a VQA benchmark that augments VQA accuracy with robustness measures such as plausibility, validity, and consistency. The work presented in (Thrush et al., 2022) assesses the ability of a model to discriminate correct (image, caption) pairs from foil ones obtained by rearranging nouns and adjectives. CrossVQA (Akula, Changpinyo, Gong, Sharma, Zhu, & Soricut, 2021) presents a collection of test splits for assessing VQA generalisation in a variety of visual domains, spanning from photographs (e.g., VQA v2) to real-world images taken by visually impaired users (e.g., VizWiz (Gurari, Li, Stangl, Guo, Lin, Grauman, Luo, & Bigham, 2018)).

4.3 The importance of Dialogue for Language Learning

We argue that Interactive Grounded Language Learning is a plausible paradigm for training embodied agents to understand and use Natural Language. However, few of the papers that we analysed try to solve a task involving a multi-turn conversation between two agents. Additionally, the analysed papers can be further categorised based on the degree of interaction that they implement.
4.3.1 Scripted dialogue policy

Dialogue can be scripted, such as in the work by (Patki & Howard, 2018) and (Antunes et al., 2017). In this case, the dialogue policy follows a predefined script that allows the model to complete the reference task. A modification of this setup involves having dedicated dialogue managers that learn when to learn (Yu et al., 2017a; Thomason et al., 2019b). Specifically, the dialogue policy relies on specific intents to detect when the model should update its internal representation as a result of a classification error.

4.3.2 Learned dialogue policy

A more relevant setup for this work is one where the agent learns when and what to generate directly from the data. For instance, (Agarwal et al., 2019) propose a collaborative game involving two bots. One of them, the A-bot, receives an image and a caption; the other, the Q-bot, receives only the caption. Given the caption, the Q-bot has to generate the image after asking the A-bot some questions about it. In this way, it can create a representation that is suitable for retrieving, ranking, or generating that image. No human supervision is required, and a reward can be defined in terms of how close the generated representation is to the target (based on Euclidean distance). To tackle the GuessWhat?! dataset (see Section 3.3), (De Vries et al., 2017) propose to train separate models for the three different tasks that need to be solved, namely question generation, target object guessing, and answer generation. (Shekhar et al., 2019) propose an end-to-end neural model that uses a multi-task learning approach to jointly train the question generation and guesser modules. However, their model treats the image as a single feature vector generated by a pretrained CNN, ignoring more fine-grained information present in real-world scenes.
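The distance-based reward in the Q-bot/A-bot game above can be made concrete with a small sketch. Reading the reward as the negative Euclidean distance between the Q-bot's regressed representation and the target image embedding is our interpretation; the vectors and function names here are illustrative only.

```python
import numpy as np

def guessing_reward(predicted_repr, target_repr):
    """Reward the Q-bot for regressing a representation close to the
    target image embedding: negative Euclidean distance, so the reward
    increases as the guess improves."""
    return -np.linalg.norm(predicted_repr - target_repr)

target = np.array([1.0, 0.0, 2.0])       # target image embedding
early_guess = np.array([0.0, 0.0, 0.0])  # uninformed guess
late_guess = np.array([0.9, 0.1, 1.9])   # guess after asking questions
print(guessing_reward(early_guess, target) < guessing_reward(late_guess, target))  # True
```

Because the reward is computed purely in embedding space, the training signal needs no human annotation beyond the initial captions, which is the point made in the surveyed work.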
An approach that tries to more directly integrate the image features in the process of discovering the target object was presented in (Deng et al., 2018). They propose a model that implements an iterative attention process that tries to co-ground the query features to object-centric features associated with the image. However, they take into account only the guessing problem, reducing its applicability to the full set of tasks that have to be solved in GuessWhat?!. A similar setup is also described in visual dialogue (e.g., (Kottur et al., 2019; Das, Kottur, Gupta, Singh, Yadav, Moura, Parikh, & Batra, 2017b)). For instance, CLEVR-Dialog (Kottur et al., 2019) has been defined to study visual coreference resolution in a multi-turn dialogue involving two agents. However, their main interest is to define models that are able to correctly track the state of the conversation rather than actually generate the questions/answers.

4.3.3 Multi-turn interaction

Another important element is the relevance of multi-turn interaction and the granularity of the learning task. Specifically, some papers consider only single-turn interactions while others assume that the interaction can last up to N steps. Papers studying Natural Language evolution (e.g., (Lazaridou et al., 2020, 2016; Choi et al., 2018; Havrylov & Titov, 2017)) can be considered as instances of a simplified dialogue, namely a signalling game (Lewis, 1969). Here, they limit the interaction to single-turn exchanges where one agent (the Speaker) generates a message and the other (the Hearer) has to select the target image among a set of distractors. For instance, (Lazaridou et al., 2018) studies the communication protocol developed by agents equipped with different types of visual encoders. In these studies, they are interested in the type of language developed by the agents as a result of the task-completion objective.
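A Lewis-style signalling round of the kind described above can be skeletonised in a few lines. The attribute-based policies are toy stand-ins for the learned speaker and hearer networks, and the "message" here is a symbolic attribute rather than an emergent code.

```python
import random

def play_signalling_round(speaker_policy, hearer_policy, images, rng):
    """One round of a single-turn signalling game: the Speaker sees the
    target and emits a message; the Hearer picks a candidate image."""
    target = rng.randrange(len(images))
    message = speaker_policy(images[target])
    guess = hearer_policy(message, images)
    return 1.0 if guess == target else 0.0  # shared task-completion reward

# Toy lexicalised policies: the message is just the image's colour
# attribute, which the Hearer looks up among the candidates.
speaker = lambda image: image["colour"]
hearer = lambda message, images: next(
    i for i, img in enumerate(images) if img["colour"] == message)

images = [{"colour": "red"}, {"colour": "blue"}, {"colour": "green"}]
rng = random.Random(0)
rewards = [play_signalling_round(speaker, hearer, images, rng) for _ in range(10)]
print(sum(rewards) / len(rewards))  # 1.0: this protocol is unambiguous
```

When colours are unique the protocol always succeeds; the interesting cases studied in the emergent-communication literature arise when the message space is smaller than the space of discriminative features, forcing compositional codes.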
For instance, a system that tries to learn word meanings in an interactive fashion was presented by (Zhang et al., 2018). They present a synthetic task where an agent needs to learn the mapping between an image of an object and a phrase, as shown in Figure 19. The images depict small-sized animals. They trained the model in a one-shot learning regime, where the model learns to classify a test instance by relying on a single training instance of that class. To implement their model, they rely on a multimodal variant of the Neural Turing Machine read-write memory mechanism (Graves, Wayne, & Danihelka, 2014). Specifically, it learns to associate a specific word (e.g., "cat") with a corresponding visual representation (e.g., a "cat image"). In this paper, the dialogue involves learning multiple object categories at the same time, and a single turn is required to learn the association between an image and a given category. Furthermore, the language used was synthetically generated by a grammar and therefore lacks the diversity of Natural Language. Despite these limitations, they tackle an important problem in the Machine Learning community, namely Few-shot Learning (Wang & Yao, 2019): the agent needs to learn to maximise the performance P for a task T using a limited amount of experience E about it. In addition, they use a training strategy for their agent that combines both Imitation Learning and Reinforcement Learning. In particular, they assume that early in training the agent is incapable of generating meaningful responses, so it imitates what the teacher says. Later on in training, it starts generating more interesting responses, so they use Reinforcement Learning to provide a more precise signal about the quality of the generated response. This gives the agent the chance to learn to "babble" and to correctly solve the conversational game.

Figure 19: Model presented by (Zhang et al., 2018).
The Speaker learns to mimic the Teacher by generating a statement associated with the available images. The Speaker has to learn animal (and fruit) concepts in a one-shot learning regime: after a single example of the class is shown to the Speaker by the Teacher, they are asked to classify an image associated with the same concept.

A model able to support a conversation composed of multiple turns can automatically generate extra training data from a very small bootstrap set. For instance, the work presented by (Misra et al., 2018b) describes a learning-by-asking paradigm where the model improves its VQA accuracy by learning to ask relevant questions about a given image. In this way, the model is able to retrieve data points that go beyond the training set and therefore learn better representations for the downstream task. Depending on the task, it is debatable whether dialogue history plays an important role in the decision-making strategy of the learning agent. In particular, tasks requiring single-turn interaction do not model the history in the first place. Additionally, even tasks based on multi-turn interaction do not necessarily require dialogue history. For instance, VisDial (Das et al., 2017b) is designed as a dialogue task, but the questions that humans develop can be answered by simply looking at the image. (Agarwal et al., 2020) show that models with sophisticated history encoding mechanisms bring very marginal improvements, due to the fact that only a portion of the data actually requires the dialogue context. Therefore, special attention should be paid to the task design to make sure that dialogue history actually matters.

4.4 Embodiment & Grounding

In the analysed papers, a common trend emerges: in most of the studies, the reference for grounding is an image that represents the context of a situated interaction.
As discussed in the previous section about action execution tasks (see Section 3.3), 3D simulators and virtual environments have been used to study the problem of grounding in visually rich contexts. (Yu et al., 2018a) propose Guided Feature Transformation, a grounded language model extending a classical CNN with an end-to-end differentiable component able to model the interactions between language and vision. They compute a recursive relationship where the input representation generated by the CNN is refined over J steps according to the words in the input command. They design several navigation tasks in a 2D/3D environment, as shown in Figure 20: 1) nav: navigate to a specific object; 2) nav nr: navigate to an object near the specified one; 3) nav bw: navigate to a location between two objects; 4) nav avoid: navigate to any object but the specified one; and 5) nav dir: navigate to an object specified by a relative direction w.r.t. another object.

Figure 20: Examples of navigation tasks solved in the work presented by (Yu et al., 2018a).

(Chevalier-Boisvert et al., 2018) propose a first model for the BabyAI environment. They use a GRU module to model the language instruction and FiLM (Perez, Strub, De Vries, Dumoulin, & Courville, 2018) for the vision module, which learns to condition the visual representation on the instruction representation. The agents' accuracy on these tasks is very high (nearly 100%), raising some doubts about the complexity of the world and of the language expressions used. Instead, (Bahdanau et al., 2018a) describe a learning framework for instruction-conditional RL agents that have to execute instructions in a grid world. In their RL setup, the agent learns from a reward that is derived from a reward model trained on expert demonstrations. The proposed model is able to learn a reward model that performs well from a limited amount of data.
Specifically, with a training set of only 8,000 examples, the agent could reach a performance of 60%, which can be improved to optimal performance when a set of 100,000 examples is used to train the model. Motivated by the desire to create artificial agents that can follow instructions in domestic environments, there have also been several efforts transitioning from grid-world setups to benchmarks for language-guided navigation in indoor 3D environments with photo-realistic images. Several models have been proposed in the literature to solve this problem. For instance, several Transformer-based models have been designed for this task (e.g., (Majumdar, Shrivastava, Lee, Anderson, Parikh, & Batra, 2020b; Magassouba et al., 2021; Hong, Wu, Qi, Rodriguez-Opazo, & Gould, 2020)).

Figure 21: Architecture of the Episodic Transformer model presented by (Pashevich et al., 2021). It jointly encodes the language instruction, camera observations, and previous actions using a dedicated multimodal encoder. The hidden state representation of the last state is then used to predict the action and the object class. Figure adapted from (Pashevich et al., 2021).

An important ingredient for all of these models is a careful pretraining design. Specifically, to deal with the complexity of the language instructions, language-only pretrained weights are used (e.g., BERT (Devlin, Chang, Lee, & Toutanova, 2019b)). Then, large-scale image-text pairs from the Web are leveraged to train the model. Using such additional pretraining data helps fine-tune the cross-modal encoder that will be used for the downstream task evaluation in a photo-realistic environment such as Matterport3D (Chang et al., 2017). Several projects have also investigated visual question-answering capabilities in 3D environments.
For instance, the work presented by (Gordon et al., 2018) introduces the IQA dataset, which requires the agent to move in the environment in order to answer questions. It extends VQA in several ways: 1) navigation in the environment; 2) understanding of the environment (objects, actions, affordances); and 3) executing actions conditioned on the question. The main question types are 1) counting; 2) existence; and 3) spatial relationships. They present the Hierarchical Interactive Memory Network (HIMN), a Deep Learning model that incorporates a hierarchy of modules: a Planner, the decision-making component responsible for activating several task-specific modules. The designed sub-modules are 1) navigator; 2) scanner; 3) detector; 4) manipulator; and 5) answerer. HIMN is unable to differentiate between an object being inside a container and being on top of it. From the language perspective, the questions are scripted and the language is relatively simple, a downside whose solution the authors leave as future work. With the aim of solving the ALFRED benchmark (Shridhar et al., 2020), several models have been proposed; the proposed solutions are very specific to the task at hand.

Figure 22: Overview of the FILM architecture for language-guided task completion in ALFRED. They developed a modular architecture composed of several pretrained components that are connected together to solve specific tasks. The goal instruction is the only linguistic information that is available. From it, an execution plan is derived, where a semantic search policy looks for the relevant objects in the scene, while a deterministic policy actually executes the relevant actions. Figure adapted from (Min et al., 2021).

In the work presented by (Pashevich et al., 2021), they propose a Transformer-based architecture
that encodes the CNN representation associated with each timestep, together with the language embeddings as well as the previous actions (see Figure 21). In this way, they can use the hidden state representations to generate actions conditioned on the context. Specifically, they use the hidden state of the last trajectory timestep to predict the action to be executed as well as a possible object class that the agent should manipulate. The limitation of such an approach is that the architecture is specifically designed and trained only for ALFRED. For instance, for manipulation actions, they assume that only a single instance of a given object class is available in the manipulation area (e.g., the counter will contain only one "apple", so no ambiguity is present). Therefore, it is possible to use an external object detector to detect the objects and then simply select the one to manipulate based on the class predicted by the model. This forces the model to reason only in terms of which actions can manipulate certain classes, but it does not allow the model to develop a good understanding of the actual objects in the agent's front view, because its visual representation is not fine-grained enough. A more generic approach was proposed by (Suglia, Gao, Thomason, Thattai, & Sukhatme, 2021), who designed a multimodal Transformer able to model the fine-grained, multimodal ALFRED trajectories represented in terms of detected objects in the visual scene. As highlighted by (Storks, Gao, Thattai, & Tur, 2021), navigating towards the target object is the hardest task in ALFRED because it can easily put the agent in a situation from which it cannot recover. Additionally, the navigation problem is exacerbated by the training regime used to train such models: their policy is trained using supervised learning, mimicking human trajectories.
However, at training time, agents do not have the ability to explore the environment to gain a better understanding of the objects and their locations, which might be important to successfully solve the tasks. Another possible solution is to avoid tackling the problem in an end-to-end fashion, and instead divide it into multiple modules. For instance, as shown in Figure 22, modular architectures such as FILM (Min et al., 2021) can achieve better performance on this task. First, a dedicated module derives a semantic representation from the goal instruction; this represents a high-level plan that the agent will have to execute. Specific navigation modules, paired with a map of the environment, are then used to navigate towards target objects. Despite its effectiveness compared to concurrent approaches², its utility is debatable for models that aim to learn grounded representations useful across multiple tasks as well as domains. Indeed, such a model relies on specific algorithms that are used at inference time to complete the task. At the same time, it is important to understand that future models should have a way to derive a map of the environment in which they are located if they want to solve increasingly complex tasks with complete autonomy.

5. Discussion

In this section, we provide a summary of our study of the literature on visually grounded language learning. As a result of our analysis, we derive several research questions that will be detailed in the following subsections. To answer these questions, novel evaluation benchmarks that are more suitable for grounded language learning, as well as novel modelling solutions, will be required. This survey represents an important reference for studying how to extend both models and related benchmarks to push the boundaries of grounded language learning.
Finally, we compare our analysis with previous surveys in the field of V+L, and describe the importance of our categorisation for further study of the field in the future. 5.1 How does this survey relate to other V+L surveys? The main focus of this survey is to provide a systematic categorisation of several tasks that have been explored in the V+L community. In particular, we provide a categorisation based on 3 broad categories, namely discriminative games, generative games, and interactive games. This aims to provide a higher-level perspective on the different tasks, which we believe will help researchers share knowledge and generalise results that have been produced for a specific task but could be applied to another one as well. For instance, (Ferraro, Mostafazadeh, Huang, Vanderwende, Devlin, Galley, & Mitchell, 2015) presents a survey of image captioning datasets, covering work only up to 2015, which can be placed in the generative games category of our survey. The work from (Li, Zhang, Zhang, Liu, Guo, Ni, Zhang, & Zhang, 2022) presents a very detailed analysis of recent Transformer-based architectures that focuses on discriminative and generative games only, ignoring the importance of interactive and embodied games for grounded language learning. A recent survey presented by (Gu et al., 2022) underlines the importance of embodiment and interactive learning. They analyse the field of Vision+Language navigation in great detail. However, as specified in our description of embodied interactive games, visual navigation ignores an important part of learning, which is object interaction. (Mogadala, Kalimuthu, & Klakow, 2021) presents a very broad survey of general vision and language research up to the year 2019, whereas this paper provides a more up-to-date survey and analysis focused on Interactive Grounded Language Learning tasks. (Footnote 2: FILM was 2nd in the ALFRED leaderboard as of 5th May 2022.) We consider Interactive Grounded Language 
Learning a new avenue for multimodal machine learning research. In particular, we argue for the importance of bringing together two relevant problems, namely \u201csymbol grounding\u201d and \u201cconversational grounding\u201d, which we believe should be studied together to build flexible learning agents that can smoothly adapt to different speakers and domains using Natural Language. Finally, we discuss the relationship between our survey and surveys which provide a broader view of artificial agents able to learn language. For instance, (Schlangen, 2019b) uses the notion of language games for describing Natural Language Processing tasks. This survey identifies the task of \u201cunrestricted situated language interaction\u201d as \u201ca natural supremum\u201d in NLP tasks, which is akin to the interactive language games that we consider in this work. In our survey, as first defined in (Bisk, Holtzman, Thomason, Andreas, Bengio, Chai, Lapata, Lazaridou, May, Nisnevich, et al., 2020b), we underline the importance of considering NLP tasks that require the agent to \u201cexperience\u201d the result of its actions in the world as well as communicate and collaborate with other agents, uncovering the social role of Natural Language. 5.2 How can language games be used for Visually Grounded Representation Learning? In this survey, we have analysed several tasks as well as associated models that have been proposed to solve them. Each of them has a specific focus of interest in modelling visual scenes as well as language. Depending on the way the output is generated, we divided them into three types: discriminative, generative, and interactive tasks. We argue that interactive tasks are the most important for future work because they are closer to the language games scenario that we have described. 
Among them, we consider visual guessing games an important task for learning grounded representations of objects. Objects in an environment, and their attributes, are the first step in learning grounded representations. Other interactive language games are embodied ones. Particularly, we consider language-conditioned action execution in a 3D world as the most relevant benchmark for learning other important elements of grounded representations, such as actions and events. In particular, it will be important to consider new modelling techniques that facilitate the development of agents that can negotiate and coordinate plans of actions in the world. 5.3 What type of representations can be learned via language games? When investigating the quality of distributed representations, it is possible to define several desiderata for them (Bengio, Courville, & Vincent, 2013). This is an important problem, especially for symbol grounding, where we want to learn representations that are compositions of elements that are in turn grounded in the real world (Harnad, 1990). As previously stated, if we focus only on increasing the performance on the downstream task, evaluated on a held-out test set, we obtain only an opaque understanding of the system\u2019s real capabilities. Most datasets and models assume this scenario where the model is never confronted with novel object categories. Its generalisation abilities are assessed only via novel instances of known categories. Therefore, we want to make sure to investigate language games that are complex enough to let the agent learn grounded representations. A representation can be considered grounded when it is a composition of other grounded elements. Therefore, to assess this ability of the model, we should consider tasks involving zero-shot learning scenarios where novel categories are involved. 
In this case, the only way for the agent to solve the task is to use what it has learned during training in novel ways, therefore demonstrating its ability to systematically generalise. To mitigate this issue, (Suglia et al., 2020) describes a novel multi-task evaluation framework that extends the traditional evaluation framework for grounded language learning to give value to the quality of the learned representations. Thanks to this evaluation framework, the authors have shown several limitations of current multimodal models used for situated visual guessing games. Therefore, it is very important that future benchmarks address this issue by designing evaluation setups that assess the model generalisation capabilities in a more systematic way (Linzen, 2020). 5.4 Are learned multimodal representations generic enough to be transferred to other tasks? From the works described in Section 4, a common trend in the V+L literature is to propose a model architecture having inductive biases that can be applied to a specific task only. However, it is important to underline that another property of high-quality distributed representations is their suitability for other tasks different from the one they have been originally trained on. Concretely, this implies that language games should favour the emergence of important properties of the real world that can be captured by the model. If such knowledge is generic enough, the agent should be able to reuse it for solving other tasks as well. For instance, (Suglia, Bisk, Konstas, Vergari, Bastianelli, Vanzo, & Lemon, 2021) argue that visual guessing games can be developed as a generic transfer learning procedure for grounded language learning. This stresses the idea that more general architectures able to solve multiple tasks are required (Brown et al., 2020). 
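The zero-shot evaluation setting discussed above, holding out entire categories rather than individual instances, can be sketched as:

```python
import random

def zero_shot_split(examples, holdout_fraction=0.25, seed=0):
    """Split so that test categories never appear in training.

    `examples` is a list of (input, category) pairs; whole categories are held
    out, unlike a standard i.i.d. split over instances. Illustrative sketch,
    not the protocol of any specific benchmark.
    """
    categories = sorted({cat for _, cat in examples})
    rng = random.Random(seed)
    rng.shuffle(categories)
    n_test = max(1, int(len(categories) * holdout_fraction))
    test_cats = set(categories[:n_test])
    train = [ex for ex in examples if ex[1] not in test_cats]
    test = [ex for ex in examples if ex[1] in test_cats]
    return train, test
```

Solving the test portion then requires composing knowledge acquired at training time, rather than recognising previously seen category instances.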
When considering the generalisation power of the developed models, it is also important to study the ability of these multimodal models to be applied to scenarios where not all the modalities are involved. For instance, it is still not clear whether current instances of multimodal architectures are able to bring an improvement in text-only scenarios such as Question Answering, Natural Language Inference, and so on. Several recent works have tried to address this issue (e.g., Vokenization (Tan & Bansal, 2020)) but they are not generic enough to be easily transferred to other tasks because they rely on additional components that are not part of the same trained architecture. It is important that future work investigates truly multimodal models that can rely on multiple modalities to create richer semantic representations that can be reused in scenarios different from the one they were originally trained on. 5.5 What representations are useful for action-driven and event-driven language games? Among the grounded language learning tasks in the literature, visual guessing games represent the first step in a curriculum for learning object categories and their attributes. However, objects are only one part of the spectrum required for understanding. As argued in (McClelland, Hill, Rudolph, Baldridge, & Sch\u00fctze, 2020), the main target of language is the understanding of situations (Barwise & Perry, 1981). Situations are highly relational constructs composed of entities and relationships between them. Quoting (McClelland et al., 2020): \u201cTo understand a situation is to construct a representation of it that captures aspects of the participating objects, their properties, relationships and interactions, and resulting outcomes\u201d. 
This resonates with the idea that even the understanding of nouns (i.e., entities) requires composing affordances of them (Glenberg & Kaschak, 2002). Following the idea of grounded cognition (Barsalou, 2008), embodied experiences of concepts and events are required for the development of meanings. Thanks to the development of 3D environments, it is now possible to study embodied tasks and their relationship with grounded cognition. We have analysed several interactive tasks that study the problem of completing a task guided by Natural Language. First, language-guided navigation was proposed as a benchmark for embodied AI. Equipping the agent only with navigation actions is limiting its ability to learn grounded meanings. Therefore, several other benchmarks have been proposed to study more complex task-completion tasks involving manipulating objects as well. These environments represent an important avenue for embodied AI. Particularly, a general embodied AI architecture should be able to encode arbitrary long action sequences and make decisions upon them. However, simulated environments are limited by the capabilities of the physics engine, the object repertoire and the number of scenes available. Another interesting perspective for a learning agent is represented by videos in the third-person view (i.e., exocentric view). Learning from such multimodal inputs can expose the learning agents to many more situations that are useful in real-world contexts. 5.6 How does Symbol Grounding relate to Conversational Grounding? As we highlighted in the introduction, interaction with the world and with other agents are key ingredients for language learning. When playing language games, speakers coordinate among themselves to refer to things in the world and achieve goals. Additionally, the ability to engage in conversation allows agents to be more robust in situations of uncertainty. 
Many recent works have identified conversational grounding (Brennan & Clark, 1996) as foundational for the future of conversational systems. More importantly, many recent position papers have convincingly argued for the importance of connecting symbol and conversational grounding to create truly collaborative systems (Liu, Fang, She, & Chai, 2013; Chai, Fang, Liu, & She, 2016; Schlangen, 2019a; Benotti & Blackburn, 2021; Chandu et al., 2021). It is therefore important to define new tasks where symbol grounding and conversational grounding together have the role of first-class citizens, and where communicating agents have divergent information which they need to coordinate (Lemon, 2022). Divergence is essential because it is what motivates agents to communicate to achieve specific goals. Additionally, this idea of divergence is related to the concept of Theory of Mind (Astington & Baird, 2005), which is used in this context to refer to social understanding. Following (Tomasello, 2010), social understanding interacts with language communication in two ways depending on the type of social understanding desired. In particular, understanding the goals and perceptions of an agent will engage different language activities than the ones demonstrated when trying to understand one\u2019s desires and beliefs. Studying the interplay between symbol and conversational grounding in a systematic way will require dedicated data collection processes aimed at collecting data in realistic embodied environments from which it is possible to elicit both symbol and communicative grounding phenomena that a model can learn from. Capturing these phenomena requires additional methods to model the conversational context as well as to derive relevant signals from the language feedback provided by other agents. 
It is important to consider learning scenarios closer to the real world, involving tasks that provide an ecologically valid setting (De Vries et al., 2020) for studying multimodal conversational grounding. 6. Future Research Directions In this section, we will describe potential interesting research directions that we argue are worth exploring as a result of this literature review. These research directions are important for the development of truly generalist agents that can perceive, reason, and act in novel scenarios. 6.1 Perceptual symbols Language is a means of communication, which has evolved by transmitting symbols grounded in perceptual experience (e.g., gestures, sounds, etc.). However, as described in Section 4, current AI models learn a language only from symbolic representations derived from a tokenizer that are: 1) specific for a given language and costly to transfer to low-resource languages, thus limiting people\u2019s access to technology; 2) sensitive to noise (e.g., spelling mistakes); 3) hand-crafted, because they do not represent language input in a multimodal way, as humans do. Thanks to recent innovations in Computer Vision, researchers have attempted to render language data as images and to apply new Deep Learning architectures from Computer Vision to this rendered input. This solution was first proposed for Machine Translation by (Salesky, Etter, & Post, 2021), who use a CNN architecture to derive visual representations of the input text that was rendered as an image. From these representations, the authors were able to translate to a target language without the need for an input tokenizer. (Rust, Lotz, Bugliarello, Salesky, de Lhoneux, & Elliott, 2022) generalised this approach by proposing PIXEL, the first language model pre-trained with rendered images only. This model is pretrained using a self-supervised reconstruction loss that uses the image patches to learn latent representations of the language. 
In their paper, the authors demonstrate performance comparable to BERT on several tasks such as POS tagging and Natural Language Inference. Most importantly, they show superior performance for very low-resource languages as well as languages that do not have a Latin script. Deriving language representations from the audio signal is another recent trend that goes under the name of textless NLP (Kharitonov, Copet, Lakhotia, Nguyen, Tomasello, Lee, Elkahky, Hsu, Mohamed, Dupoux, et al., 2022). For instance, the work presented in (Harwath & Glass, 2017) demonstrates how to derive word-like representation units directly from visual and audio features. Additionally, there is a very interesting line of work interested in learning representations from vision, language, and audio data (e.g., (Zellers, Lu, Lu, Yu, Zhao, Salehi, Kusupati, Hessel, Farhadi, & Choi, 2022)). However, this approach is still constrained by the use of a tokenizer, which limits applicability to specific language scripts. Deriving language representations from multimodal inputs is essential to achieve more realistic NLP systems that are more widely applicable, such as Spoken Language Understanding systems. Moreover, spoken language carries many nuances (prosody, irony, anger, etc.) and expressive vocalisations (laughter, yawning, etc.) that are not captured by text. Additionally, modelling language via the visual modality can allow the implementation of more sophisticated agents that are able to understand the content of websites, forms, and other media that contain multimodal content. In general, modelling language directly from other perceptual modalities has the potential to make AI applications more natural and expressive. 
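The patch-based, tokenizer-free input used by models such as PIXEL can be illustrated with a minimal patch extractor (an illustrative re-implementation, not the authors' code):

```python
def image_to_patches(image, patch=4):
    """Split a rendered-text image (2D list of grayscale values) into
    non-overlapping patch x patch blocks, flattened row-major. These blocks are
    the tokenizer-free input units consumed by pixel-based language models.
    """
    h, w = len(image), len(image[0])
    assert h % patch == 0 and w % patch == 0, "pad the rendering to a patch multiple"
    patches = []
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            patches.append([image[top + i][left + j]
                            for i in range(patch) for j in range(patch)])
    return patches
```

Because the units are pixel blocks rather than subwords, the same pipeline applies unchanged to any script, which is what enables the cross-script transfer results reported for PIXEL.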
To advance the state of the art in Multimodal Representation Learning, we should aim to design perceptual symbol systems that derive language representations from multimodal perceptual experiences, in a process akin to the seminal work from (Barsalou, 1999). 6.2 Multimodal robustness To date, many large-scale V+L pretrained models have been proposed, and they are typically only used for V+L tasks. However, there are very few works (e.g., (Tan & Bansal, 2020; Singh et al., 2022)) that demonstrate the ability of such models to bring a performance gain when it comes to single-modality tasks (e.g., vision-only or language-only). Therefore, we believe that the new generation of V+L models should be able to effectively encode the input modalities as well as be robust to perturbations of them (i.e., cases where some modalities are missing or noisy). For instance, the work by (Ma, Ren, Zhao, Testuggine, & Peng, 2022) shows how multimodal models receiving noisy or missing modalities achieve performance that is inferior to, or just on par with, unimodal models. Therefore, another important ingredient for the implementation of a sound and effective perceptual symbol system is the ability to encode and reconstruct modalities when these are missing or noisy. Crucially, it is important to investigate the robustness of multimodal representations in general. Much recent literature has tackled this problem from the point of view of domain generalisation (e.g., (Akula et al., 2021)) as well as the effect of spurious correlations and noise (e.g., (Tu, Lalwani, Gella, & He, 2020)). This calls for a more systematic empirical investigation of the effect of such shifts in distribution and how they affect models across multiple tasks. 
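One common training-time recipe for the robustness to missing modalities discussed above is modality dropout, sketched here for assumed dictionary-of-vectors inputs (an illustration of the general technique, not the exact method studied by Ma et al., 2022):

```python
import random

def modality_dropout(features, p_drop=0.3, rng=None):
    """Randomly zero out whole modalities at training time so the model learns
    to cope when one is missing at inference. `features` maps a modality name
    (e.g. "vision", "language", "audio") to its feature vector.
    """
    rng = rng or random.Random()
    out = {}
    for name, vec in features.items():
        if rng.random() < p_drop:
            out[name] = [0.0] * len(vec)  # simulate a missing modality
        else:
            out[name] = list(vec)
    return out
```

A model trained with such perturbations is exposed to unimodal and partially observed inputs, rather than only the fully multimodal case seen in standard training.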
For this reason, it is important to develop experimental frameworks that diagnose what matters in learning representations for V+L tasks (e.g., (Zhu, Qi, Narayana, Sone, Basu, Wang, Wu, Eckstein, & Wang, 2022; Akula, Gella, Al-Onaizan, Zhu, & Reddy, 2020)). 6.3 Interaction with the world and with other agents As we described in Section 5.6, future V+L agents need to develop the ability to interact with the world around them. This is important because the ultimate goal is to create AI systems that can communicate with humans in Natural Language and develop a seamless symbiotic relationship with them. The current wave of Generative AI models has demonstrated that having conversational skills is essential for user-facing systems such as ChatGPT (OpenAI, 2022). In this scenario, we will assume that the agent is embodied in an environment (either simulated or realistic) and that there are one or multiple agents in it. The agent, motivated by a goal g, will have to produce a sequence of actions that achieve its goal. The agent can perform two types of actions: physical actions and verbal actions. Physical actions involve a change in the agent\u2019s position or a change in the environment around it. In both cases, the agent will be able to observe the results of its actions and derive signals from these changes that can be used for learning purposes. Verbal actions instead are intended as utterances that are formulated when engaging in conversations with other agents. This is required to acquire information and resolve possible ambiguities arising in the process of conversational grounding. Additionally, we also envision other types of verbal actions that involve inner monologue (Vygotsky, 2012). This form of self-referential dialogue could be very effective if intended as a form of planning that allows the agent to foresee the outcome of certain actions, and recover from possible problems. 
This approach has already been implemented in some form in text-only language models via chain of thought (Wei, Wang, Schuurmans, Bosma, Chi, Le, & Zhou, 2022; Yao, Zhao, Yu, Du, Shafran, Narasimhan, & Cao, 2022) as well as using text-only models for planning in 3D simulated environments (e.g., (Huang, Xia, Xiao, Chan, Liang, Florence, Zeng, Tompson, Mordatch, Chebotar, et al., 2022)). 7. Conclusions In this work, we surveyed datasets and tasks proposed to study the problem of visual symbol grounding in multimodal machine learning models. Following the language games proposed in the Talking Heads experiments (Steels, 2015), we proposed a categorisation of these tasks into 3 main categories: discriminative, generative, and interactive language games. We argue that interactive games represent the most relevant class of language games to study the problem of grounded language learning. Additionally, we studied the properties and the skills required for solving the proposed language games. We find that language games situated in 3D environments enabling navigation and object manipulation represent an important step forward. In this way, artificial agents can learn object reference in the world (i.e., required when navigating in an environment), as well as the effects of certain actions on certain objects (i.e., observed when manipulating objects) which are essential for the development of rich grounded meaning representations. After categorising all the available tasks, we analysed the multimodal machine learning models that have been proposed to solve them. We completed an analysis based on several key characteristics that these models have. We find that in order to support interactive language games, we need more sophisticated ways of encoding rich multimodal context, as well as supporting conversations aimed at collecting important information for the task at hand. 
In addition, we highlighted the need for multimodal models able to generate representations that are meaningful even in scenarios where some modalities are missing (e.g., text-only settings), or when some of the modalities are noisy. This is an essential ingredient for developing systems that are resilient to adverse scenarios that are very common in the real world. As a result of this survey, we have identified several weaknesses primarily concerned with the datasets used to train such models. The development of sophisticated 3D engines has sparked research into embodied multimodal models able to solve tasks in simulated environments. This represents an important direction for future work with the aim of developing more ecologically valid studies (De Vries et al., 2020) of how grounded meanings can be developed. Additionally, to adequately study language learning, we stressed that we require datasets demanding multi-agent communication skills. These are essential to account for recovery from possible breakdown and uncertainty during task solving. Moreover, this will enable agents to learn new tasks from interaction (Gluck et al., 2018). 8. Acknowledgements We would like to thank Arash Eshghi and Raquel Fernandez for their feedback on a preliminary version of this manuscript which was part of the first author\u2019s PhD thesis (Suglia et al., 2022)."
+ },
+ {
+ "url": "http://arxiv.org/abs/2211.04534v1",
+ "title": "Going for GOAL: A Resource for Grounded Football Commentaries",
+ "abstract": "Recent video+language datasets cover domains where the interaction is highly\nstructured, such as instructional videos, or where the interaction is scripted,\nsuch as TV shows. Both of these properties can lead to spurious cues to be\nexploited by models rather than learning to ground language. In this paper, we\npresent GrOunded footbAlL commentaries (GOAL), a novel dataset of football (or\n`soccer') highlights videos with transcribed live commentaries in English. As\nthe course of a game is unpredictable, so are commentaries, which makes them a\nunique resource to investigate dynamic language grounding. We also provide\nstate-of-the-art baselines for the following tasks: frame reordering, moment\nretrieval, live commentary retrieval and play-by-play live commentary\ngeneration. Results show that SOTA models perform reasonably well in most\ntasks. We discuss the implications of these results and suggest new tasks for\nwhich GOAL can be used. Our codebase is available at:\nhttps://gitlab.com/grounded-sport-convai/goal-baselines.",
+ "authors": "Alessandro Suglia, Jos\u00e9 Lopes, Emanuele Bastianelli, Andrea Vanzo, Shubham Agarwal, Malvina Nikandrou, Lu Yu, Ioannis Konstas, Verena Rieser",
+ "published": "2022-11-08",
+ "updated": "2022-11-08",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.CL"
+ ],
+ "main_content": "INTRODUCTION Research on visual language grounding and visual dialogue combines recent advances in computer vision and natural language processing by grounding conversation in image [57] or video data [2]. Open domain dialogue systems seek to embody their own \u2018persona\u2019 to be entertaining [56], while other systems use external knowledge to reason [67]. In this research, we propose the domain of sports as a test-bed that combines several elements of multimodal and grounded natural language processing. In particular, we collect a new dataset of football commentaries (see transcript in Figure 1) which contains speech, transcribed text, real background audio and video data. The task requires visual grounding as in video caption generation (see transcript 2,3,4,5,6), integrating external knowledge sources, such as knowledge about standings (1), personal opinion (2,3,6), previous events/history (4), while being entertaining (6). So far, none of the current state-of-the-art models is able to address all these challenges at once. For example, the recent multi-modal Transformer model [61] only combines audio, video and captions. Therefore, GOAL represents a unique opportunity to develop and evaluate models with a larger number of input modalities. As such, the GOAL dataset aims to complement current efforts in providing large scale datasets for benchmarking knowledge and image grounded systems [58] and video-subtitle moment retrieval [35]. For example, Shuster et al. [58] propose the Dialogue Dodecathlon, which assembles existing datasets representing various sub-tasks, including asking and answering questions, responding to statements, as well as persona, knowledge, situation and image grounding. In contrast to the Dodecathlon benchmark, our dataset requires these skills to be demonstrated within a single task, which we believe makes it a more challenging setup. 
In addition, GOAL aims to probe existing pretrained models in a setup which is closer to the one envisaged by de Vries et al. [11] by creating a corpus from live football commentaries, where video and language are not scripted but instead are collected from a realistic setting involving complex visual scenes and articulated (noisy) language commentaries. As such, the main contributions of this paper are as follows: 1) we release GOAL, a new dataset based on football commentaries that requires advances in several fields such as visual language grounding, data-to-text generation and multi-modal fusion; 2) we provide an evaluation framework based on frame reordering, commentary retrieval, commentary generation and video moment retrieval; 3) we evaluate variants of the state-of-the-art video+language model presented in [37] on these sub-tasks and compare the results to uni-modal ablations. 2 BACKGROUND AND RELATED WORK Live sports events are broadcast daily around the world. Broadcasts feature journalists and commentators who provide descriptions of the events happening during the match, technical insights and facts about athletes taking part in the event. As commentators refer to the game they are watching, their language is intrinsically grounded in the events occurring as well as in general knowledge about teams and players involved in the game. In addition, commentators rely on game statistics to improve the viewer experience both from an entertaining and a technical perspective. Bringing all these dimensions together to generate human-like football commentary unites several on-going research challenges from knowledge grounded data-to-text generation and several video+language tasks. Table 1 provides a comparison between previous resources in these sub-fields and GOAL\u2019s contributions to the community. 
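As an illustration, the frame reordering task listed among the contributions can be scored with pairwise ordering accuracy (one plausible choice of metric; not necessarily the exact metric used in the paper):

```python
def pairwise_order_accuracy(predicted, gold):
    """Fraction of frame pairs whose relative order the model gets right, given
    a predicted ordering of frame ids and the gold chronological ordering.
    """
    position = {frame: i for i, frame in enumerate(predicted)}
    correct = total = 0
    for i in range(len(gold)):
        for j in range(i + 1, len(gold)):
            total += 1
            if position[gold[i]] < position[gold[j]]:
                correct += 1
    return correct / total
```

The metric degrades gracefully: a prediction with one adjacent swap loses only one of the n(n-1)/2 pairs, whereas a fully reversed ordering scores zero.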
2.1 Data-to-text generation Data-to-text (D2T) generation takes structured data, such as RDF triples [15] or tabular data [47], and translates it to natural language. D2T has been applied to sports statistics [3, 26, 39, 45, 51, 64] by using tabular data to generate an end-of-match summary report. van der Lee et al. [62] propose a template-based football report generation method, which was further enhanced in [17] by creating a link to Wikidata. GOAL can be seen as an extension of these D2T generation tasks with the help of vision, language and audio features. Furthermore, we address live commentary, rather than end-of-match report generation, which has to take live events into account, similar to play-by-play commentary generation [29, 59] and alignment to game events [20]. Recent work [69] has contributed to this research by releasing a new dataset of play-by-play comments and events in more than 2,000 matches. However, in contrast to play-by-play commentary, which is added post-hoc and as such is written and clean, GOAL contains transcriptions of live commentary with all the properties of spontaneous speech, such as repairs or unfinished sentences. Live commentary generation for sports broadcast has been previously investigated by Chen and Mooney [5] but for simulated football matches only, where a structured representation of the visual scene is available. 2.2 Video+language datasets In contrast to D2T approaches, which \u2018ground\u2019 in external facts, current video+language datasets mostly focus on situational grounding. Initial approaches to video+language, such as Karpathy et al. [25], cast video understanding as a multi-class classification problem where labels are based on a class taxonomy for sports activities. A parallel line of research has focused on grounded instructions, such as cooking [52, 71], where datasets contain time-aligned textual descriptions of events for downstream tasks such as caption generation [72] and retrieval [43]. 
One of the limitations of instructional videos is their highly structured nature, which allows models to learn task cues rather than ground language. In contrast, datasets such as ActivityNet [30], DiDeMo [21] and MSR-VTT [66] contain randomly selected content. For example, MSR-VTT contains videos ranging from sports to political speeches. Lastly, datasets created from TV series and movies, such as LSMDC [53], VIOLIN [41] or TVR [35], also span a wide variety of domains and situations, but are highly scripted due to the dialogical nature of these TV productions. In contrast, sports videos are not structured like instructional videos, nor are they scripted like TV shows or movies. The action is shot from different angles and often replays are used. Recently, football datasets have received increased interest in the vision community for action recognition [8, 12, 25]. The closest to the work presented here is Pasunuru and Bansal [49]. 
Table 1: GOAL vs existing video+language datasets, in terms of features, language and size. Datasets compared, as Name (Domain, Language, # Videos, Avg. Duration in seconds unless stated): TACOS [52] (Cooking, EN, 127, 260); YouCook II [71] (Cooking, EN, 2,000, 326); HowTo100M [55]* (Instructional, EN/PT, 79,114, 90); MSR-VTT [66]* (Various, EN, 10,000, 10-30); ActivityNet Captions [30] (Various, EN, 20,000, 120); DiDeMO [21]* (Various, EN, 10,464, 5); ViTT [24] (Various, EN, 8,169, 7.1 per segment); TVR [35]* (TV shows, EN, 21,793, 76.2); VIOLIN [41] (TV-shows, EN, 95,322, 90); Twitch-FIFA [49]* (Video-games, EN, 49, 1.76 hours); GOAL (Football Commentary, EN, 1,107, 238). GOAL is the only dataset covering all of: situation grounding, KB grounding, manual annotations, spoken language and spontaneous language. 
However, in contrast to GOAL, Pasunuru and Bansal do not aim to generate professional commentary, but written chit-chat between spectators watching games on 
(*) means that not all the language is grounded in the visual scene or in any piece of external knowledge (e.g. chit-chat in TV-shows). Twitch.tv. The authors report that more than half of the chat is not related to the video and thus not useful to evaluate grounded models. 3 GOAL BENCHMARK 3.1 Dataset preparation GOAL contains 1,107 video highlights from football matches from major competitions in Europe between 2018 and 2020 available on YouTube. We selected the videos so that 1) they only contained highlights 2) the commentaries are in English1 and 3) they have a minimum duration of 80 seconds. Following other dataset creation initiatives (e.g., [33, 34, 71]), we will publish the data following fair use. Specifically, interested researchers, after signing an NDA, will have access to the source URLs, transcriptions produced by professional transcribers, information about players, teams, lineups and matches, extracted video features [37] and audio features extracted with OpenSmile using the IS2010 feature set [13]. 3.1.1 Transcriptions curation. We use professional transcription services 2 rather than Automatic Speech Recognition due to the specific sub-register used at sports events (e.g. specific terminology as well as player and team names) and due to the presence of background noise. In order to support the transcription process, we provided the transcribers with the knowledge base information for a specific match (e.g. line-ups, substitutions, ...). Transcriptions were provided as captions paired with the corresponding timestamps (i.e. start and end). For consistency, sections which were considered unintelligible by the professional transcribers were manually verified. The majority was resolved, leaving only a few still unintelligible, marked with the token [?]. Transcriptions are thus grouped by start and end timestamps identifying the time frame in which a specific caption chunk has been uttered. 
[1: We are planning to extend the dataset to other languages in a future release.]
[2: We used GoTranscript for our data collection.]
However, sentences referring to a specific event may be split into several contiguous chunks, breaking the syntax and meaning of the whole sentence. For example, chunk 2 in Figure 1 may have been split into two different chunks due to the timestamp-based grouping, with the first chunk "Emre Can with a wonderfully clever pass to Bernardeschi" ending at time t_i and the second one "who runs out of room and tries to pick out Ronaldo." starting at time t_j (with t_j > t_i). Similarly, chunks 5 and 6 clearly refer to the same event even though they have been split. In order to perform the tasks designed in this benchmark, complete and well-formed sentences had to be reconstructed, i.e. all chunks composing a sentence or referring to the same event had to be merged. To satisfy these requirements, we defined two alternative heuristics to refine and post-process the output transcriptions:
(1) sentence-based: two consecutive chunks s_t and s_{t+1} are merged into a single chunk whenever s_t does not end with end-of-sentence punctuation (e.g. a full stop). This heuristic would merge the two parts of chunk 2 in Figure 1 if they were originally split, since the first part does not end with a period.
(2) event-based: two consecutive chunks s_t and s_{t+1} are merged into a single chunk if the distance between the end timestamp of s_t and the start timestamp of s_{t+1} is lower than 2.5 seconds. This heuristic would merge chunks 5 and 6 if the difference between the start timestamp of 6 and the end timestamp of 5 is < 2.5 seconds.
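The two merging heuristics above can be sketched as follows. This is a minimal illustration, not the authors' code: the chunk dictionaries and field names are hypothetical simplifications, while the end-of-sentence check and the 2.5-second gap threshold follow the description in the text.

```python
# Sketch of the two chunk-merging heuristics (sentence-based and event-based).
# Each chunk is assumed to be a dict with "start"/"end" timestamps and "text".

END_PUNCT = (".", "!", "?")

def merge_sentence_based(chunks):
    """Merge consecutive chunks until a chunk ends with end-of-sentence punctuation."""
    merged = []
    for chunk in chunks:
        if merged and not merged[-1]["text"].rstrip().endswith(END_PUNCT):
            prev = merged[-1]
            prev["text"] = prev["text"].rstrip() + " " + chunk["text"]
            prev["end"] = chunk["end"]
        else:
            merged.append(dict(chunk))
    return merged

def merge_event_based(chunks, max_gap=2.5):
    """Merge consecutive chunks whose timestamp gap is below max_gap seconds."""
    merged = []
    for chunk in chunks:
        if merged and chunk["start"] - merged[-1]["end"] < max_gap:
            prev = merged[-1]
            prev["text"] = prev["text"].rstrip() + " " + chunk["text"]
            prev["end"] = chunk["end"]
        else:
            merged.append(dict(chunk))
    return merged
```

Applied to the Figure 1 example, the sentence-based rule joins the two halves of chunk 2 because the first half lacks a final period, while the event-based rule joins them because their timestamp gap is below 2.5 seconds.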
Using these post-processing heuristics, we obtain two different versions of the dataset; specific versions of the dataset are used by specific tasks in the GOAL benchmark depending on their temporal granularity. Please refer to Section 4 for details.
3.1.2 Dataset statistics. The resulting dataset has been divided into 3 splits using stratified sampling based on the league label. We first completed a 70-30 sampling, obtaining the train+val and test splits. Then, we completed an 80-20 sampling, obtaining the train and val splits. These splits are the ones we use as reference for all the tasks presented in the GOAL experimental evaluation. First of all, in Table 2 we report the statistics about the reference splits that are used in all the proposed tasks.

Table 2: GOAL dataset statistics reported for the training, validation, and test splits generated using stratified sampling based on the football league of each video.

Split | # Scenes | # Sentences | # Tokens
Train | 6623 | 29295 | 10815
Validation | 1728 | 7608 | 5700
Test | 3520 | 16123 | 8186

In order to demonstrate the complexity of the language contained in GOAL, we used the spaCy toolkit [3] to calculate the vocabulary size without named entities. To do that, we extracted all the sentences in the dataset and, for each of them, all the unique tokens associated with named entities. We removed these from our vocabulary, giving a vocabulary of size 6819 tokens for the training set. This indicates that a delexicalisation process would not overly simplify the language complexity of the dataset, making GOAL a challenging dataset for the Vision and Language community.
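The two-stage stratified split described above (70-30 into train+val vs. test, then 80-20 into train vs. val, stratified by league) can be sketched as follows. The function name, the per-label shuffling, and the rounding of split points are assumptions; only the ratios and the stratification criterion come from the text.

```python
import random
from collections import defaultdict

def stratified_two_stage_split(items, label_fn, seed=0):
    """Two-stage stratified split: 70-30 into train+val vs. test,
    then 80-20 of train+val into train vs. val, per label group."""
    by_label = defaultdict(list)
    for item in items:
        by_label[label_fn(item)].append(item)
    rng = random.Random(seed)  # fixed seed for reproducibility (assumption)
    train, val, test = [], [], []
    for label, group in sorted(by_label.items()):
        rng.shuffle(group)
        cut = round(len(group) * 0.7)
        trainval, test_part = group[:cut], group[cut:]
        cut2 = round(len(trainval) * 0.8)
        train += trainval[:cut2]
        val += trainval[cut2:]
        test += test_part
    return train, val, test
```

Because the split is performed within each league group, the league distribution is approximately preserved across the three splits.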
The 1,107 videos are extracted from 8 different leagues and competitions: the Italian Serie A (638 videos), the English Premier League (167 videos), the UEFA Champions League (62 videos), the UEFA Europa League (61 videos), the FA Cup (34 videos), the Carabao Cup (18 videos), the EFL Championship (77 videos) and the EURO 2020 Qualifiers (50 videos).
3.1.3 Knowledge base integration. Football commentators often resort to their knowledge of the football scene, as well as to additional external sources about players, line-ups, teams and so on. We construct a knowledge base to surrogate such factual information, so that it can be exploited to extend the capabilities of a model. We extract all the information about line-ups, play-by-play commentary, summaries of game events, and teams, as well as personal data about players (e.g. age, position, nicknames), from www.flashscore.com for each of the 1,107 matches. In addition, we link teams and players to the corresponding Wikidata entity [63] by retrieving their Wikidata identifier, to augment the knowledge base with semantic information following Linked Open Data principles [4].
3.2 Tasks Description
To demonstrate the utility of the dataset, we conduct an evaluation on four tasks that involve both grounded natural language understanding and generation skills, following [37]: 1) commentary retrieval, 2) frame reordering, 3) video moment retrieval, and 4) commentary generation.
Notation: Every video v in the set of videos V has been divided into K chunks using the heuristics described above. Every chunk v_{s,e} \in v is identified by start and end timestamps (s and e, respectively), and paired with the caption c_{s,e} of the corresponding live commentary between timestamps s and e.
Commentary retrieval: We formulate this task akin to response retrieval tasks as used for image-based [10, 44, 57] as well as video-based dialogues [2]. We use the event-based version of our dataset for this task. In particular, given a chunk of the video v_{s,e} as well as the previous history H_{v_{s,e}} = \{v_{*,p} : v_{*,p} \in v \wedge p \leq s\}, a model has to select the most appropriate candidate response \tilde{r} among a set of n candidates R_{v_{s,e}}. In this setup, we assume that R_{v_{s,e}} is composed of a ground-truth (GT) response \hat{r} and n-1 negative candidates. The GT corresponds to the caption of the video chunk c_{s,e}. The negative candidates are sampled uniformly at random from the entire dataset. The average sentence similarity between the ground truth and the negative candidates was 0.35 in the training set [4]. Future work will explore more sophisticated techniques for adversarial candidate generation [32, 68]. To evaluate the performance of models on this task, we use retrieval-based metrics similar to [2, 10, 44, 57]: 1) R@1: whether the GT response is in the first position of the ranked list (higher is better); 2) Mean Reciprocal Rank (MRR): the average of the reciprocal rank of the GT response (higher is better); 3) Mean rank (mean): the average position of the GT response (lower is better). For this evaluation, we limit the number of candidates n to 5.
[3: https://spacy.io/]
Frame reordering: Similar to the pretraining task in HERO [37], we define a frame reordering task for the event-based version of the dataset. This task serves to understand whether the model is able to learn the event dynamics of the video.
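The three retrieval metrics used above for commentary retrieval (R@1, MRR, mean rank) can be sketched as follows; the input representation (a ranked candidate list plus the ground-truth response per example) is a hypothetical simplification.

```python
def retrieval_metrics(examples):
    """Compute R@1, MRR, and mean rank.

    Each example is (ranked_candidates, ground_truth), where
    ranked_candidates is ordered by model score, best first.
    """
    ranks = [candidates.index(gt) + 1 for candidates, gt in examples]
    n = len(ranks)
    return {
        "R@1": sum(r == 1 for r in ranks) / n,   # GT ranked first (higher is better)
        "MRR": sum(1.0 / r for r in ranks) / n,  # mean reciprocal rank (higher is better)
        "mean": sum(ranks) / n,                  # mean rank of the GT (lower is better)
    }
```

With n = 5 candidates as in the paper, a random ranker would score 0.2 on R@1 and 3.0 on mean rank, which is the chance level against which the text-only baseline is compared.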
This is especially relevant since, in general, football videos have a faster pace than cooking videos or TV shows. We divide a given video chunk v_{s,e} into k sequential video features V_{s,e} \in R^{k \times h}. We shuffle 15% of them and ask the model to predict the correct frame indexes. We formulate this as a classification task where the model has to predict, for every visual feature, a label in the set \{1, 2, ..., M\}, where M is the maximum number of visual features that can belong to a given video, in our case 100. Therefore, the corresponding label set ranges from 0 to 99. We evaluate model performance based on categorical accuracy.
Moment retrieval: This task follows the Video-Subtitle Matching pretraining task presented in [37] and uses the sentence-based version of the dataset. Given 1) m video features V \in R^{m \times h_v} (where h_v is their dimensionality), and 2) a caption c_{s,e}, the model has to predict the start and end indexes (\tilde{s}, \tilde{e}) of c_{s,e} [5]. It is worth noting that \tilde{s} and \tilde{e} are predicted independently; this may give rise to invalid combinations (i.e. \tilde{s} > \tilde{e}). To deal with invalid spans, at inference time we select the span with the maximum likelihood among the valid spans only (i.e. \tilde{s} < \tilde{e}). We evaluate the performance of the models based on two metrics: 1) Soft: the average number of spans that have a soft match, i.e. whenever there is a partial overlap between the predicted span (\tilde{s}, \tilde{e}) and the gold one (s, e); 2) Weighted: similar to Intersection over Union, which evaluates the proportion of matching predicted indexes compared to the gold ones.
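The Soft and Weighted span metrics for moment retrieval can be sketched as follows. We assume inclusive integer index spans; the paper describes Weighted only as "similar to Intersection over Union", so the exact IoU formulation below is an assumption.

```python
def soft_match(pred, gold):
    """1 if the predicted span partially overlaps the gold span, else 0."""
    (ps, pe), (gs, ge) = pred, gold
    return int(max(ps, gs) <= min(pe, ge))

def weighted_match(pred, gold):
    """Intersection-over-union of the predicted and gold index spans
    (spans are inclusive, so a span (s, e) covers e - s + 1 indexes)."""
    (ps, pe), (gs, ge) = pred, gold
    inter = max(0, min(pe, ge) - max(ps, gs) + 1)
    union = (pe - ps + 1) + (ge - gs + 1) - inter
    return inter / union
```

The gap between the two metrics explains the reported results: a wide predicted span easily overlaps the gold span (high Soft score) while sharing few exact indexes with it (low Weighted score).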
Commentary generation: The intuition for this task is the following: imagine watching a short video snippet from timestamp s to e, pausing, and then generating a description for that chunk. Note that a given caption for a chunk c_{s,e} can contain several sentences \{c^1_{s,e}, c^2_{s,e}, ..., c^L_{s,e}\}. The model has to generate one sentence at a time, conditioned on both the video and on what has been generated so far (i.e. the commentary history). For this task we use the sentence-based version of GOAL and report standard NLG evaluation metrics, including BERTScore [70], BLEU@4 [48], METEOR [31] and ROUGE-L [40]. We also perform a detailed error analysis to cope with the fact that current NLG metrics are not well correlated with human judgements [46].
[4: Sentence similarity was computed using the stsb-roberta-large model available in https://github.com/UKPLab/sentence-transformers. We use the cosine similarity to compare the sentence embeddings.]
[5: We automatically derive the alignment between visual features and captions based on timing overlap.]
4 EXPERIMENTAL EVALUATION
This section reports the results for each of the tasks outlined in Section 3.2, where we compare a fine-tuned HERO model [37][6] to uni-modal baselines. We decided to use HERO because it is a large-scale Video+Language (V+L) model considered the baseline model for VALUE, another recent V+L benchmark [38]. This comparison aims to assess the extent to which fusing the two modalities is required to complete the task [60]. We used the GOAL validation set for hyperparameter tuning and the test set as a benchmark for the evaluation reported in the paper. Please refer to the Appendix for details about the selected hyperparameters. We are planning to release the codebase that enables users to reproduce our experiments.
4.1 Commentary retrieval
In this task, we assess the ability of the model to select the best response among a set of candidates, similar to the Visual Dialog (VisDial) task specified in Das et al. [10]. We are interested in the following questions: 1) are all modalities useful to complete the task? 2) is incorporating and using the commentary history important to do well? The first question is motivated by the fact that datasets like VisDial and VQA can often be 'solved' by exploiting language artifacts. The latter is motivated by Agarwal et al. [1]'s finding that VisDial only requires minimal history context. To answer the first question, we compared the HERO model with two uni-modal baselines: 1) Vision-only HERO: a HERO model that only uses the current video to select the candidate; 2) RoBERTa: a RoBERTa model fine-tuned on the GOAL dataset using a multiple-choice objective [42]. In addition, to assess the importance of encoding the history, we evaluated HERO with different numbers of textual tokens in the history (h) as well as different numbers of captions (c), where h represents an upper bound on the number of tokens included. Results in Table 3 show that a strong text-only baseline like RoBERTa performs marginally above chance, indicating that video is required to correctly select the next response. The HERO (no history) model, which encodes visual features and the candidate response using the HERO Cross-modal Transformer, shows an increase of +19.4% in R@1, which confirms that vision is useful. However, this model does not take the history into account. By experimenting with different levels of complexity for history encoding, we observe that encoding just the last 50 tokens (h = 50) of the previous caption (c = 1) gives a boost in performance of +1.6% over the no-history model.
[6: Due to computational constraints, we were able to run the HERO-flat version of the model only.]
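The (h, c) history construction used in the ablation above can be sketched as follows. Whitespace tokenisation and the order of truncation (keep the last c captions, then the last h tokens) are assumptions; the paper only states that h is an upper bound on the number of included tokens.

```python
def build_history(captions, c=1, h=50):
    """Build the textual history from the last c captions,
    keeping at most the last h (whitespace) tokens overall."""
    tokens = " ".join(captions[-c:]).split()
    return " ".join(tokens[-h:])
```

For example, with c = 1 and h = 50 the model sees at most the last 50 tokens of the single preceding caption, matching the smallest history configuration in Table 3.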
Table 3: Results of the response retrieval evaluation on the event-based GOAL dataset. We compare HERO with uni-modal baselines to understand the utility of using both modalities for the task. We also experiment with different numbers of history tokens (h) and different numbers of captions (c) that compose the history.

Model | R@1 | MRR | Mean
RoBERTa (h=50, c=1) | 36.3% | 0.593 | 2.320
HERO (no history) | 55.7% | 0.737 | 1.746
HERO (h=50, c=1) | 57.3% | 0.748 | 1.715
HERO (h=50, c=2) | 58% | 0.755 | 1.673
HERO (h=80, c=2) | 59.9% | 0.765 | 1.648
HERO (h=100, c=2) | 63% | 0.79 | 1.58

By further increasing the history, all retrieval metrics continue improving. In sum, our results show that models trained on GOAL require both multi-modal fusion and history, which makes this dataset better suited to studying grounded language than datasets such as VisDial or VQA.
4.2 Frame reordering
We intend frame reordering as a diagnostic task to assess whether multi-modal models retain the ability to infer the order of the encoded visual inputs. We compare HERO with several baselines using the same pretrained visual features. In particular, we report results for four baseline sequence encoders that use an MLP with two hidden layers to complete the classification task: 1) MLP: receives the visual features as input without positional information; 2) MLP+pos: extends the previous model with positional embeddings; 3) bi-GRU: a one-layer bidirectional GRU [6] generating contextual representations for the visual features; 4) bi-LSTM: the same but using LSTM cells [22]. Results are shown in Table 4. We see that a model that does not use positional information (MLP) barely goes above 10%, indicating that the task indeed requires positional information. When using expressive models that innately have a bias for sequential modelling, such as LSTMs or GRUs, accuracy is almost perfect. However, HERO seems to sacrifice its ability to recover the original frame order to better encode other elements of its input.
This result indicates the need for a more systematic way of incorporating multi-modal information in single-stream transformer models, as well as better ways of dealing with positional information across multiple modalities.

Table 4: Results for the frame reordering task on the GOAL dataset. For every instance, 15% of the video frame features are shuffled.

Model | Accuracy
MLP | 10.42%
MLP+pos | 10.99%
bi-GRU | 90%
bi-LSTM | 91%
HERO | 87%

4.3 Video moment retrieval
For video moment retrieval, we compare the HERO model with a late-fusion baseline which we call VRoBERTa. VRoBERTa concatenates visual embeddings (the same used in HERO) with the [CLS] hidden state representation generated by RoBERTa for the current caption. The resulting representation is passed through a feed-forward neural network to predict the likelihood of each frame being the start or end index. We keep the layout of the feed-forward network consistent between the two models for a fair comparison.

Table 5: Results for the moment retrieval task on the GOAL dataset. We compare the HERO model with a strong vision+language baseline, VRoBERTa.

Model | Soft | Weighted
HERO | 70.6% | 2.6%
VRoBERTa | 32.24% | 2.11%

The results in Table 5 show that the cross-modal attention learned by HERO boosts performance by +38.36 and +0.49 for the Soft and Weighted scores, respectively. An example-based analysis shows that predicted spans are fairly wide and therefore exact match scores are low.

Table 6: Results for commentary generation on the GOAL test set. We compare text-only decoders with different ablations of the HERO model. BART-kb-t* is an upper-bound oracle, since we used the target output to generate the KB triples.

Model | BERTScore | BLEU | METEOR | ROUGE-L
CopyNet | 0.848 | 0.41% | 4.30% | 9.05%
BART | 0.848 | 0.79% | 3.80% | 8.79%
BART-kb | 0.848 | 0.84% | 3.92% | 8.78%
BART-kb-t* | 0.874 | 2.28% | 8.21% | 21.2%
RoBERTa | 0.845 | 0.36% | 2.94% | 5.78%
HERO-t | 0.847 | 0.53% | 3.55% | 6.69%
HERO-nt | 0.847 | 0.60% | 3.7% | 7.2%
A possible explanation is the noise introduced by the automatic video feature-subtitle alignment (cf. Appendix). Nevertheless, HERO achieves reasonable performance; we believe this is because video-subtitle matching is a pretraining task for HERO [37].
4.4 Commentary generation
4.4.1 Automatic Evaluation. We train several text-only and vision-based models as commentary generation baselines. As text-only baselines we report CopyNet [19], a sequence-to-sequence model with a copy mechanism; BART [36], an auto-regressive encoder-decoder model; and RoBERTa, an encoder-decoder architecture initialised with RoBERTa's weights for both the encoder and decoder components, following [54]. We follow the same approach for our vision-based encoder-decoder architecture based on HERO. We experimented with two variants to investigate whether sharing the cross-attention layers of the pretrained HERO encoder benefits decoder training: 1) HERO-t: we tie the weights of the encoder and decoder, allowing for complete parameter sharing; 2) HERO-nt: we only tie the word embedding layers of the HERO encoder and decoder, and fine-tune the decoder on the sentence-based version of the GOAL dataset. To assess the usefulness of the KB during commentary generation, as shown in Table 7, we tested a BART variant where named entities are extracted from textual information in the commentary and used to retrieve information from the KB. This was done both from the history (BART-kb) and from the target sentence, to set an upper-bound oracle (BART-kb-t*). The retrieved KB information is expressed in natural language and concatenated to the history. For example, if 'team B' is identified in the context, the KB information is encoded as, e.g., 'team B is the away team'. Similarly, for a 'player A', we get 'Player A is a team B striker'.
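The KB verbalisation step above can be sketched as follows. The record schema (`type`, `role`, `team`, `position`) and the lookup by surface name are hypothetical; only the output templates ("X is the away team", "X is a team B striker") come from the text.

```python
def verbalise_kb(entities, kb):
    """Turn KB records for entities found in the context into
    natural-language facts to be concatenated to the history."""
    facts = []
    for name in entities:
        record = kb.get(name)
        if record is None:
            continue  # entity not covered by the KB
        if record["type"] == "team":
            facts.append(f"{name} is the {record['role']} team.")
        elif record["type"] == "player":
            facts.append(f"{name} is a {record['team']} {record['position']}.")
    return " ".join(facts)
```

The resulting string is simply prepended or appended to the textual history, which is the position ablation (PiC) reported in Table 7.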
We expect that by providing player positions to the model, it will learn game-related knowledge, such as that a striker is more likely to score goals than a goalkeeper. We also experiment with the position at which the KB information is integrated. From our analysis, the model is always able to refer to the KB information no matter where it is located in the input. All models use a history of 5 previous sentences. We generate sequences token-by-token using nucleus sampling [23] (p = 0.9 for CopyNet, p = 0.95 for the others) and top-k sampling [14] (k = 10 for CopyNet, k = 100 for the others). We truncate the generated output to 25 tokens. The results in Table 6 show that all models perform relatively poorly. Even though CopyNet scores high on the recall-based metrics METEOR and ROUGE, its lower BLEU score indicates less coherent text compared to the pretrained baselines. Among those, BART outperforms the other text-only baseline, RoBERTa, by 0.43 BLEU, even though they are pretrained on the same data. When KB information is added, BLEU improves only by 0.05. Despite HERO and RoBERTa sharing the same encoder-only architecture, HERO's visual component enables better representation learning, with a gain of +0.24 BLEU. Finally, sharing all encoder-decoder layers (HERO-t) is not beneficial for this task. A more in-depth analysis across datasets is required to find effective strategies for multi-modal encoder-decoder architectures.
4.4.2 Error analysis. We further investigate generation performance with an error analysis, since automatic metrics do not fully capture model performance for language generation tasks, e.g. [27, 46].
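The top-k and nucleus (top-p) filtering used for decoding above can be sketched as follows. This is a generic pure-Python illustration of the two sampling schemes, not the authors' implementation; in practice the filtered distribution would be sampled from at each generation step.

```python
import math

def filter_logits(logits, top_k=0, top_p=0.0):
    """Apply top-k and/or nucleus (top-p) filtering to a list of logits.

    Returns the renormalised probability distribution over the token ids
    that survive the filtering, as a dict {token_id: probability}.
    """
    probs = [math.exp(l) for l in logits]
    total = sum(probs)
    probs = [p / total for p in probs]  # softmax over the logits
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    if top_k:
        order = order[:top_k]  # keep only the k most likely tokens
    if top_p:
        kept, cum = [], 0.0
        for i in order:  # smallest set whose cumulative mass reaches p
            kept.append(i)
            cum += probs[i]
            if cum >= top_p:
                break
        order = kept
    z = sum(probs[i] for i in order)
    return {i: probs[i] / z for i in order}
```

Nucleus sampling adapts the candidate set to the shape of the distribution (a peaked distribution keeps few tokens, a flat one keeps many), whereas top-k always keeps a fixed number, which is why the paper tunes p and k separately per model.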
We manually analysed 100 randomly selected generated outputs for the best-performing models, HERO and BART, using 3 labels to indicate performance levels: plausible caption, incoherent text, and repetition of previous text [7]. For text-based models, our judgement was based only on the textual context given to the model (i.e. the history). This means that a commentary suggesting "the ball hit the crossbar" is labelled as plausible given the goal-chance context, even if the video was showing a goal. For the multi-modal model, we considered both the textual and visual contexts.
[7: Some generated examples can be found in the Appendix.]

Table 7: Hyper-parameter tuning for BART-based models extended with the GOAL knowledge base. We report different configurations depending on the reference text used to extract the information from the GOAL KB (KB-source), and the position in the input text where the KB information is added (PiC); this can be either before the history (first) or after (last).

Model | KB-source | PiC | BERTScore | BLEU | METEOR | ROUGE-L
BART | history | first | 0.85 | 1.06 | 4.04 | 8.89
BART | target | first | 0.85 | 1.25 | 4.34 | 9.82
BART | history | last | 0.85 | 1.03 | 4.14 | 8.97
BART | target | last | 0.87 | 2.56 | 8.32 | 21.2

Figure 2: Example responses generated by the trained models for the commentary generation task. HERO uses both the video frame features and the history information to generate the response, while BART only uses the textual history generated so far. This example is taken from a goal replay event.
Inter vs Hellas Verona
History: He's pointing to the head, Antonio Conte. He knows exactly where he's got his team back into this game. It's been coming. They've been knocking on the door for so long. It's a relatively simple straightforward ball into the box for Vecino to really attack with pace and purpose and power.
Ground truth: It is exactly that.
BART: He was just pruning off the edge of the box.
HERO: Good finish from Hakimi.

Table 8: Detailed results of the manual error analysis conducted on the output of the commentary generation models. The results are based on a sample of 100 contexts for which we manually analysed HERO and BART response predictions based on the defined annotation scheme. We evaluate the generated commentary based on the ground-truth language or both language and video contexts.

Model | Language: Plausible | Repetition | Incoherent | Language+Video: Plausible | Repetition | Incoherent
BART | 71% | 9% | 20% | N/A | N/A | N/A
HERO | 77% | 0 | 23% | 43% | 0 | 57%

Results in Table 8 show that HERO avoids repetition and produces mostly plausible outputs, indicating that HERO's visual features provide some innate bias towards more plausible responses, influenced by representations that can capture high-level features of events (e.g. goals, penalties, etc.). This underlines our previous point that GOAL requires both language and visual grounding, in contrast to other Vision+Language tasks. In particular, responses that are considered plausible from the text-only perspective immediately become incoherent when observing the video. However, HERO's visual features are not fine-grained enough to capture details of the events, leading to a higher percentage of incoherent responses. While the current improvement from the KB is small (BART-kb), we hypothesise that it could be more useful with entity extraction from vision (rather than from the text history), e.g. by tracking the player who has the ball, using OCR to read their jersey number, and linking this information to the KB. This hypothesis was tested with a simple experiment extracting entities from the target sentence (BART-kb-t*), boosting the performance of the model. In future work, we aim to 1) improve the KB representation during the generation process, 2) share global match information with the highlight videos (e.g. play-by-play commentary), and 3) use the audio features to help identify certain events (e.g.
goals from crowd cheering).
5 SITUATED LANGUAGE ANALYSIS
Language in football commentaries is notoriously event-driven. Commentators describe how the game unfolds and how players interact, making football commentaries an interesting benchmark for learning grounded verb semantics from video. To understand how linguistic predicates relate to visual events in the dataset, we use the output of the state-of-the-art semantic role labeller InVeRo [8] [9] as silver annotations (the labeller has high accuracy in both predicate and argument prediction: 86.1% and 84%, respectively). The objective of this annotation step is to get a sense of how 'situated' the collected data is. Specifically, we are interested in how the predicative structures of event verbs describe actual situations happening in the videos, providing evidence of the correspondence between the commentary and visual streams (e.g., 'player A passes to player B'), and posing a solid base for language grounding. For a more detailed analysis, please refer to the Appendix.
[8: http://nlp.uniroma1.it/invero/]

Figure 3: Distribution of the most common English PropBank predicates extracted from GOAL, after removing predicates with modal and auxiliary function.

Thanks to this automatic procedure, we extracted 68,559 predicative structures from GOAL commentaries, ignoring sentences without verbs. As we can see from Figure 3, common verbs like do and get very often receive the senses do.02 and get.05, which represent physical actions and movements, relating to events happening in the videos. Verbs with high relevance to football jargon often appear in their 'sportive' sense, such as score.01 and strike.01, and even more ambiguous verbs assume a more action-related sense, e.g., miss, whose sense miss.02 describes a no-hit action. Other verbs relating to the progress and status of the match are also often used, such as play, win, lead, and beat.
Such verbs appear in their sport-specific senses, i.e., play.01, win.01, lead.01, and beat.03, respectively. All these occurrences of verb senses show how the language used in the commentaries actually refers to events displayed in the videos, thus providing evidence of the situated nature of the dataset. An interesting case is that of the verb clear. In commentaries, this verb is often used with the sense of 'clearing the ball/area', e.g., Santander with a header to clear it to safety. This specific sense is rarely covered in general textual pre-training datasets, hence the failure of InVeRo, which relies on BERT, to assign the right sense. The context from the video is, in this case, fundamental to learning a refined representation for such verbs. Studying how to integrate richer context in language models is an interesting avenue for multi-modal representation learning.
6 CONCLUSIONS
We present GOAL, a new dataset that enables the study of knowledge and visual grounding in the sports domain. We show that, in contrast to existing vision+language datasets, models trained on GOAL require grounding in both language and vision. We also discuss how external information can further improve performance. We provide results for four vision+language tasks: commentary retrieval, frame reordering, video moment retrieval, and commentary generation. Our findings on the frame reordering and video moment retrieval diagnostics indicate that current pretrained transformer-based models have limited temporal reasoning capabilities. We consider commentary generation the most challenging task, as it requires video understanding on multiple time scales. Our error analysis reveals that the text generated by the examined models matches the style of football commentaries but fails to accurately describe the depicted events. We show that generation can be supported by utilising a knowledge base.
Designing a multi-modal model able to fuse and integrate all input sources represents an exciting avenue for future research. GOAL can also serve as a useful resource for other multi-modal tasks, including: visual context-aware speech recognition, e.g. [18] (GOAL has high-quality transcriptions); multi-modal fact-checking (e.g., in a goal event, checking whether the generated commentary mentions the correct goal scorer); and multi-modal activity recognition. Additionally, GOAL represents an interesting benchmark for models that do not require textual annotations but directly solve the task using audio information. Therefore, we consider integrating the audio features provided in GOAL an interesting direction for the multi-modal machine learning community."
+ },
+ {
+ "url": "http://arxiv.org/abs/2108.04927v2",
+ "title": "Embodied BERT: A Transformer Model for Embodied, Language-guided Visual Task Completion",
+ "abstract": "Language-guided robots performing home and office tasks must navigate in and\ninteract with the world. Grounding language instructions against visual\nobservations and actions to take in an environment is an open challenge. We\npresent Embodied BERT (EmBERT), a transformer-based model which can attend to\nhigh-dimensional, multi-modal inputs across long temporal horizons for\nlanguage-conditioned task completion. Additionally, we bridge the gap between\nsuccessful object-centric navigation models used for non-interactive agents and\nthe language-guided visual task completion benchmark, ALFRED, by introducing\nobject navigation targets for EmBERT training. We achieve competitive\nperformance on the ALFRED benchmark, and EmBERT marks the first\ntransformer-based model to successfully handle the long-horizon, dense,\nmulti-modal histories of ALFRED, and the first ALFRED model to utilize\nobject-centric navigation targets.",
+ "authors": "Alessandro Suglia, Qiaozi Gao, Jesse Thomason, Govind Thattai, Gaurav Sukhatme",
+ "published": "2021-08-10",
+ "updated": "2021-11-04",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.AI",
+ "cs.CL",
+ "cs.LG"
+ ],
+ "main_content": "Introduction Language is grounded in agent experience based on interactions with the world (Bisk et al., 2020; Bender and Koller, 2020). Task-oriented, instructional language focuses on objects and interactions between objects and actors, as seen in instructional datasets (Damen et al., 2020; Koupaee and Wang, 2018), as a function of the inextricable relationship between language and objects (Quine, 1960). That focus yields language descriptions of object targets for manipulation such as put the strawberries on the cutting board and slice them into pieces (Chai et al., 2018). We demonstrate that predicting navigational object landmarks in addition to manipulation object targets improves the performance of an instruction-following agent in a rich, 3D simulated home environment. We posit that object-centric navigation is a key piece of semantic and topological navigation (Kuipers and Byun, 1991) for Embodied AI (EAI) agents generally. (\u2217Work completed via internship with Amazon Alexa AI.) Substantial modeling (Majumdar et al., 2020) and benchmark (Qi et al., 2020b) efforts in EAI navigation focus on identifying object landmarks (Blukis et al., 2018) and destinations (Batra et al., 2020b). However, for agent task completion, where agents must navigate an environment and manipulate objects towards a specified goal (Gordon et al., 2017; Shridhar et al., 2020), most predict movement actions without explicitly identifying navigation object targets (Singh et al., 2020; Pashevich et al., 2021; Nguyen et al., 2021; Abramson et al., 2020). We address this gap, grounding navigation instructions like Head to the sink in the corner by predicting the spatial locations of the goal sink object at each timestep (Figure 1). 
Transformer-based models in EAI score the alignment between a language instruction and an already-completed path (Majumdar et al., 2020) or introduce recurrence by propagating part of the hidden state to the next timestep (Hong et al., 2020). The former requires beam search over sequences of environment actions, which is not feasible when actions cannot be undone, such as slicing an apple. The latter introduces a heavy memory requirement, and is feasible only with short trajectories of four to six steps. We overcome both limitations by decoupling the embedding of language and visual features from the prediction of what action to take next in the environment. We first embed language and visual observations at single timesteps using a multi-modal transformer architecture, then train a transformer decoder model to consume sequences of such embeddings to decode actions (Figure 3). We introduce Embodied BERT (EmBERT), which implements these two key insights: 1. Object-centric Navigation unifies the disjoint navigation and interaction action sequences in ALFRED, giving navigation actions per-step object landmarks. 2. Decoupled Multimodal Transformers enable extending transformer-based multimodal embeddings and sequence-to-sequence prediction to the fifty average steps present in ALFRED trajectories. 
[Figure 1: Embodied BERT. EmBERT attends to object detections in a panoramic view around an agent, then predicts an action and both a target object and target object parent for both navigation and manipulation actions. For example, at timesteps t = 0, 1 above, the model must predict the sink object target and its parent, the countertop, while at t = 6 it predicts both the object potato to pick up and the sink on which it rests.]
2 Related Work Natural language guidance of robots (Tellex et al., 2020) has been explored in contexts from furniture assembly (Tellex et al., 2011) to quadcopter flight control (Blukis et al., 2019). Embodied AI. For task completion benchmarks, actions like pickup must be coupled with object targets in the visual world, with specification ranging from mask prediction only (Shridhar et al., 2020) to proposals for full low-level gripper control (Batra et al., 2020a). Similarly, navigation benchmarks incorporate objects as targets in tasks like object navigation (Qi et al., 2020b; Batra et al., 2020b; Kurenkov et al., 2020), and explicitly modeling those objects assists generally at navigation success (Shrivastava et al., 2021; Qi et al., 2020a, 2021). Many successful modeling approaches for navigation benchmarks incorporate multimodal transformer models that require large memory from recurrence (Hong et al., 2020), beam search over potential action sequences (Majumdar et al., 2020), or shallow layers without large-scale pretraining to encode long histories (Pashevich et al., 2021; Magassouba et al., 2021). In this work, we incorporate navigation object targets into the ALFRED task completion benchmark (Shridhar et al., 2020), and decouple transformer-based multimodal state embedding from transformer-based translation of state embeddings to action and object target predictions. In addition, differently from other approaches that train their language encoder from scratch, we successfully exploit the BERT stack in our multi-modal architecture. In this way, EmBERT can be applied to other language-guided tasks such as VLN and Cooperative Vision-and-Dialog Navigation (Thomason et al., 2019). 
Language-Guided Task Completion. Table 1 summarizes how EmBERT compares to current ALFRED modeling approaches. ALFRED language instructions are given as both a single high-level goal and a sequence of step-by-step instructions (Figure 2). At each timestep, we encode the goal instruction and a predicted current step-by-step instruction. We train EmBERT to predict when to advance to the next instruction, a technique introduced by LWIT (Nguyen et al., 2021). EmBERT uses a panoramic view space to see all around the agent. Rather than processing dense, single vector representations (Shridhar et al., 2020; Singh et al., 2020; Pashevich et al., 2021; Kim et al., 2021; Blukis et al., 2021), EmBERT attends directly over object bounding box predictions embedded with their spatial relations to the agent, inspired by LWIT (Nguyen et al., 2021) and a recurrent VLN BERT model (Hong et al., 2020). We similarly follow prior work (Singh et al., 2020; Pashevich et al., 2021; Nguyen et al., 2021; Kim et al., 2021; Zhang and Chai, 2021) in predicting these bounding boxes as object targets for actions like Pickup, rather than directly predicting a dense object segmentation mask (Shridhar et al., 2020). Consider the step heat the mug of water in the microwave, where the visual observation before turning the microwave on and after turning the microwave off are identical. Transformer encodings of ALFRED\u2019s large observation history are possible only with shallow networks (Pashevich et al., 2021) that cannot take advantage of large-scale, pretrained language models used on shorter horizons (Hong et al., 2020). We decouple multimodal transformer state encoding from sequence-to-sequence state-to-action prediction, drawing inspiration from the AllenNLP SQuAD (Rajpurkar et al., 2016) training procedure (Gardner et al., 2017). Our EmBERT model is the first to utilize an auxiliary, object-centric navigation prediction loss during joint navigation and manipulation tasks, building on prior work that predicted only the direction of the target object (Storks et al., 2021) or honed in on landmarks during navigation-only tasks (Shrivastava et al., 2021). 
[Table 1: Model comparison. Columns: Goal Inst. | Step Inst. | Views | Features | History as Inputs | History as Hidden States | Mask Pred. | Nav Obj. Pred.
SEQ2SEQ (Shridhar et al., 2020): no | no | Single | Dense | no | LSTM | Direct | no
MOCA (Singh et al., 2020): yes | no | Single | Dense | no | LSTM | BBox | no
ET (Pashevich et al., 2021): no | no | Single | Dense | TF | no | BBox | no
LWIT (Nguyen et al., 2021): yes | yes | Multi | BBox | no | LSTM | BBox | no
ABP (Kim et al., 2021): yes | no | Multi | Dense | no | LSTM | BBox | no
HITUT (Zhang and Chai, 2021): yes | yes | Single | BBox | SG | no | BBox | no
HLSM (Blukis et al., 2021): yes | \u2013 | Single | Dense | SG+Map | no | Direct | no
EmBERT: yes | yes | Multi | BBox | no | TF | BBox | yes
EmBERT uses a multimodal transformer (TF) to embed language instructions and detected objects in a panoramic view, and a transformer decoder to produce action and object predictions. Ours is the first ALFRED model to add object prediction to navigation steps. Other methods maintain history by taking previous transformer states (TF) (Pashevich et al., 2021), subgoal prediction structures (SG) (Zhang and Chai, 2021; Blukis et al., 2021), or maintained voxel maps (Blukis et al., 2021) as input.]
While mapping environments during inference has shown promise on both VLN (Fang et al., 2019; Chen et al., 2021) and ALFRED (Blukis et al., 2021), we leave the incorporation of mapping to future work. 3 The ALFRED Benchmark The ALFRED benchmark (Shridhar et al., 2020) pairs household task demonstrations with written English instructions in 3D simulated rooms (Kolve et al., 2017). ALFRED tasks are from seven categories: PICK & PLACE, STACK & PLACE, PICK TWO & PLACE, CLEAN & PLACE, HEAT & PLACE, COOL & PLACE, and EXAMINE IN LIGHT. Each task involves one or more objects that need to be manipulated, for example an apple, and a final receptacle on which they should come to rest, for example a plate. Many tasks involve intermediate state changes, for example HEAT & PLACE requires cooking the target object in a microwave. Supervision Data. Each ALFRED episode comprises an initial state for a simulated room, language instructions, planning goals, and an expert demonstration trajectory. The language instructions are given as a high-level goal instruction Ig, for example Put a cooked egg in the sink, together with a sequence of step-by-step instructions \u20d7 I, for example Turn right and go to the sink, Pick up the egg on the counter to the right of the sink, . . . The planning goals P (or subgoals) are tuples of goals and arguments, such as (SliceObject, Apple), that unpack to low-level sequences of actions like picking up a knife, performing a slice action on an apple, and putting the knife down on a countertop. The expert demonstration trajectory T is a sequence of action and object mask pairs, where Tj = (aj, Mj). Each step-by-step instruction Ii corresponds to a sub-sequence of the expert demonstration, Tj:k given by alignment lookup ma(i) = (j, k), and to a planning goal Pb by alignment lookup mp(i) = b. 
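The episode structure and alignment lookups ma and mp described above can be sketched as a small container class. This is purely illustrative (the class and field names are ours, not ALFRED's actual JSON schema):

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    # Illustrative container for one ALFRED episode as described above.
    goal: str        # high-level goal instruction I_g
    steps: list      # step-by-step instructions I
    subgoals: list   # planning goals P, e.g. ('SliceObject', 'Apple')
    trajectory: list # expert demonstration T: (action, object_mask) pairs
    ma: dict = field(default_factory=dict)  # step index -> (j, k) span in T
    mp: dict = field(default_factory=dict)  # step index -> subgoal index in P

    def demo_span(self, i):
        # expert actions supervising step-by-step instruction i
        j, k = self.ma[i]
        return self.trajectory[j:k]

    def subgoal_of(self, i):
        # planning goal aligned with step-by-step instruction i
        return self.subgoals[self.mp[i]]
```

For instance, a one-step episode whose only instruction maps to a two-action GotoLocation sub-goal would return that action span from demo_span(0).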
For example, in Figure 2, instruction I0 corresponds to a GotoLocation navigation goal, as well as a sequence of turning and movement API actions that a model must predict. Model Observations. At the beginning of each episode in timestep t = 0, an ALFRED agent receives the high-level and step-by-step language instructions Ig, \u20d7 I. At every timestep t, the agent receives a 2D, RGB visual observation representing the front-facing agent camera view, VF. ALFRED models produce an action at from among 5 navigation (e.g., Turn Left, Move Forward, Look Up) and 7 manipulation actions (e.g., Pickup, ToggleOn, Slice), as well as an object mask Mt. 
[Figure 2: EmBERT Auxiliary Predictions. ALFRED provides goal and step-by-step language instructions that are aligned with planner goals and sequences of trajectory actions in an expert demonstration (top). EmBERT additionally identifies navigational object targets in a panoramic view (bottom). EmBERT predicts an object target and its higher-visibility parent receptacle, such as the table on which the box rests.]
Predicted action at and mask Mt are executed in the ALFRED environment to yield the next visual observation. For navigation actions, prediction Mt is ignored, and there is no training supervision for objects associated with navigation actions. EmBERT Predictions. EmBERT gathers additional visual data (Figure 2). After every navigation action, we turn the agent in place to obtain left, backwards, and right visual frames VL, VB, VR. Following prior work (Singh et al., 2020), we run a pretrained Mask-RCNN (He et al., 2017) model to extract bounding boxes from our visual observations at each view. 
We train EmBERT to select the bounding box which has the highest intersection-over-union with Mt (more details in Section 4). We define a navigation object target for navigation actions. For navigation actions taken during language instruction Ii, we examine the frame VF k at time k for Tk; ma(i) = (j, k). We identify the object instance O of the class specified in the planning goal Pmb(i) in VF k. We define this object O as the navigation object target for all navigation actions in Tj:k by pairing those actions with object mask MO to be predicted during training. We also add a training objective to predict the parent receptacle P(O) of O. Parent prediction enables navigating to landmarks such as the table for instructions like Turn around and head to the box on the table, where the box compared to the table on which it rests (Figure 2). 4 Embodied BERT EmBERT uses a transformer encoder for jointly embedding language and visual tokens and a transformer decoder for long-horizon planning and object-centric navigation predictions (Figure 3). 4.1 Multimodal encoder We use OSCAR (Li et al., 2020) as a backbone transformer module to fuse language and visual features at each ALFRED trajectory step. We obtain subword tokens for the goal instruction Ig = {g1, g2, . . . , gn} and the step-by-step instruction Ij = {i1, i2, . . . , im} using the WordPiece tokenizer (Wu et al., 2016) and process the sequence as: [CLS] Ig [SEP] Ij [SEP], using token type ids to distinguish the goal and step instructions. We derive token embeddings L \u2208R(m+n+3)\u00d7de using the BERT (Devlin et al., 2019) embedding layer, where de is the embedding dimensionality. We provide EmBERT with object-centric representations by using MaskRCNN (He et al., 2017) features to represent detected objects in every frame of the panorama view. We freeze the weights of a MaskRCNN model fine-tuned for AI2-THOR frames (Singh et al., 2020). 
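The highest-intersection-over-union selection rule above can be written down in a few lines. A minimal pure-Python sketch (helper names are ours; EmBERT's actual pipeline operates on Mask-RCNN masks, not just boxes):

```python
def iou(a, b):
    # a, b: axis-aligned boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def select_target(detections, gold_box):
    # index of the detection overlapping the gold target mask's box most
    return max(range(len(detections)), key=lambda i: iou(detections[i], gold_box))
```

The selected index becomes the supervision signal for the object-prediction head described in Section 4.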
We fix the number of object detections in the front view VF to 36, while limiting those in the side views to 18. We represent each object o \u2208O as an embedding o \u2208Rdo, which is a concatenation of: 1) detection ResNet (He et al., 2016) features; 2) bounding box coordinates; 3) bounding box relative area; and 4) vertical and horizontal heading of the object relative to the current agent position, following prior work (Storks et al., 2021). These representations make up the observed object embeddings O. 
[Figure 3: Proposed Embodied BERT model. A multimodal encoder embeds goal- and step-level instructions alongside object detections from a panoramic view around the agent. This encoder produces a temporally independent hidden state ht. A sequence of such hidden states is attended by a segment-level recurrent action decoder to produce time-dependent states \u02dcht. EmBERT is trained in segments si to balance gradient flow over time with memory constraints, and previous segments are cached to be attended over in future timesteps. Time-dependent state \u02dcht is used to predict the next action, whether to start attending to the next step-by-step instruction, what object to target in the environment, that object\u2019s parent receptacle, and detected object classes.]
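The four-part object representation above is a straightforward concatenation. A rough sketch under our own naming (real ResNet features would be a 2048-dimensional vector, truncated here for illustration):

```python
def object_embedding(resnet_feats, box, frame_w, frame_h, heading):
    # Concatenate: detector features + box coordinates + relative box
    # area + the object's vertical/horizontal heading w.r.t. the agent.
    x1, y1, x2, y2 = box
    rel_area = ((x2 - x1) * (y2 - y1)) / (frame_w * frame_h)
    return list(resnet_feats) + [x1, y1, x2, y2] + [rel_area] + list(heading)
```

The resulting vector of dimensionality do is what the one-layer MLP below maps to the transformer's embedding size de.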
We use a one-layer MLP to map object embeddings of dimensionality do to size de. (In our experiments, in order to reuse the visual embedding available in the OSCAR checkpoint, we use an additional one-layer MLP to adapt our visual features to the visual embedding space learned by OSCAR.) The multi-modal transformer backbone consumes the token and object embeddings to produce multi-modal hidden states H \u2208Rm+n+|O|\u00d7de. We obtain these state representations, ht, for each timestep t by computing an element-wise product between H0 and Hm+n, the hidden state of the [CLS] token and the last [SEP] token placed between language tokens and objects, similar in spirit to the approach described in (Zhou et al., 2020). In this way, we can generate temporally independent agent states for an entire trajectory, resulting in a sequence of states {h1, h2, . . . , h|T |}. 4.2 Segment-Level Recurrent Action Decoder The ALFRED challenge requires models to learn to complete action sequences averaging 50 steps and spanning multiple navigation and manipulation sub-goals. However, due to the quadratic complexity of the self-attention mechanism, feeding long sequences to transformers is computationally expensive (Beltagy et al., 2020). Inspired by the TransformerXL model (Dai et al., 2019), we design the Segment-Level Recurrent Action Decoder architecture, which models long trajectories with recurrent segment-level state reuse. At training time we divide trajectories into temporal segments of size s. Given two consecutive segments, si and si+1, EmBERT caches the representations generated for segment si. The computed gradient does not flow from si+1 to si, but cached representations are used as extended context. When predicting the next action, the model can still perform self-attention over the previous segment representations, effectively incorporating additional contextual information that spans a high number of previous timesteps. 
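The segment-level state reuse described above amounts to keeping a bounded cache of previous-segment states as read-only attention context. A toy sketch (in a real model the cached entries would be detached tensors, which is what prevents gradients flowing between segments):

```python
class SegmentCache:
    # TransformerXL-style segment-level recurrence, in miniature.
    def __init__(self, mem_slots=200):
        self.mem_slots = mem_slots
        self.memory = []  # cached states from earlier segments

    def extended_context(self, segment_states):
        # what self-attention can attend over for the current segment
        return self.memory + segment_states

    def update(self, segment_states):
        # cache the current segment for the next one, keeping at most
        # mem_slots entries; no gradient would flow through this cache
        self.memory = (self.memory + segment_states)[-self.mem_slots:]

cache = SegmentCache(mem_slots=4)
ctx1 = cache.extended_context(['h1', 'h2', 'h3'])  # first segment: no memory yet
cache.update(['h1', 'h2', 'h3'])
ctx2 = cache.extended_context(['h4', 'h5'])        # second segment sees segment one
```

EmBERT's actual decoder uses 200 memory slots and 10-timestep training segments (Section 5).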
The TransformerXL model is intended as an encoder-only architecture, which is not able to perform cross-attention with encoder hidden states. Therefore, we introduce two novel elements to its architecture: 1) an encoder hidden states cache; 2) cross-attention over encoder states. First, our extended context is composed of both agent state representations and hidden states from the previous segment si. In addition, to perform cross-attention between decoder and encoder hidden states, we modify the TransformerXL self-attention mechanism following common practice in designing transformer decoders (Vaswani et al., 2017). EmBERT encodes the previous action for the current timestep at\u22121 and extracts an action embedding at from a learnable embedding matrix A \u2208R|A|\u00d7da. In the TransformerXL\u2019s multi-head self-attention layers, we generate keys and values from the agent state representations (encoder) and queries from the action embeddings (decoder). We obtain time-dependent agent state representations {\u02dch1, \u02dch2, . . . , \u02dch|T |} as output. Given time-dependent hidden states, the model predicts action and object mask outputs. We learn a probability distribution over the agent actions A by using a two-layer feedforward network (FFN) with dropout and GeLU (Hendrycks and Gimpel, 2016) activation receiving the hidden state \u02dcht for timestep t: \u02dch1_t = GeLU(\u02dcht W1), P(at | \u02dcht) = softmax(\u02dch1_t W2), (1) where W1 \u2208Rde\u00d7de and W2 \u2208Rde\u00d7|A| are two weight matrices. We use a sequence-based cross-entropy loss (Sutskever et al., 2014), LA, to supervise the action prediction task. In addition, we derive time-dependent fine-grained representations of token and object embeddings. 
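Equation (1) is just a two-layer GeLU head followed by a softmax. A dependency-free numerical sketch (EmBERT itself uses PyTorch modules with dropout, omitted here):

```python
import math

def gelu(x):
    # exact GeLU via the Gaussian CDF
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def vecmat(v, w):
    # row vector v (length n) times matrix w (n x m rows)
    return [sum(v[i] * w[i][j] for i in range(len(v))) for j in range(len(w[0]))]

def action_probs(h, w1, w2):
    # Eq. (1): P(a_t | h~_t) = softmax(GeLU(h~_t W1) W2)
    hidden = [gelu(z) for z in vecmat(h, w1)]
    return softmax(vecmat(hidden, w2))
```

With a zero hidden state and identity weights, the head is maximally uncertain and assigns uniform probability over the action vocabulary.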
We use conditional scaling (Dumoulin et al., 2018) to fuse the decoder hidden state \u02dcht with the embedding H to produce the time-dependent embeddings \u02dcH: \u02dcc = Wt \u02dch, \u02dcHi = \u02dcc \u00b7 Hi, i = {1, . . . , (m+n+|O|)}, (2) where Wt \u2208Rde\u00d7de is a weight matrix used to adapt the representation of the original decoder hidden state \u02dch. We predict target objects by selecting one bounding box among the detections in VF for manipulation actions, or any view for navigation actions. We treat object mask prediction as a classification task where the model first extracts time-dependent object embeddings \u02dcO = \u02dcHi, i = {(m+n), . . . , (m+n+|O|)}, and then generates logits for each object as follows: \u02dco1_i = GeLU(\u02dcO W1_o), P(oi | \u02dcOi) = softmax(\u02dco1_i W2_o), (3) where W1_o \u2208Rde\u00d7de and W2_o \u2208Rde\u00d71 are two weight matrices. At training time, we determine the target object by using the Intersection-Over-Union score between the predicted object masks generated by MaskRCNN for each object and the gold object mask. To supervise this classification task, we use a sequence-based cross-entropy loss, LO. 4.3 Auxiliary tasks During EmBERT training, we jointly optimize LA, LO, and several auxiliary tasks. Next Instruction Prediction. Several existing models for ALFRED encode the sequence of language instructions I together with the goal (Table 1), or concatenate step-by-step instructions. These simplifications can prevent the model from carefully attending to relevant parts of the visual scene. EmBERT takes the first instruction at time t = 0, and adds an auxiliary prediction task that decides when to advance from instruction Ij to instruction Ij+1. To supervise the next-instruction decision, we create a binary label for each step of the trajectory that indicates whether that step is the last step for a specific sub-goal, as obtained by ma(i). 
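The conditional scaling of Equation (2) is an element-wise modulation of every embedding by a vector computed from the decoder state. A minimal sketch with our own helper names:

```python
def vecmat(v, w):
    # row vector v (length n) times matrix w (n x m rows)
    return [sum(v[i] * w[i][j] for i in range(len(v))) for j in range(len(w[0]))]

def conditional_scale(h_dec, w_t, embeddings):
    # Eq. (2): c~ = W_t h~ ; H~_i = c~ * H_i (element-wise, per embedding)
    c = vecmat(h_dec, w_t)
    return [[ci * hi for ci, hi in zip(c, h)] for h in embeddings]
```

Every token and object embedding H_i is scaled by the same conditioning vector c~, which is what makes the resulting representations time-dependent.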
We use an FFN similar to Equation 1 to model a Bernoulli variable used to decide when to advance to the next instruction. We denote the binary cross-entropy loss used to supervise this task as LINST. Object Target Predictions. EmBERT predicts a target object for navigation actions, together with the receptacle object containing the target, for example a table on which a box sits (Figure 2). For these tasks, we use a prediction layer equivalent to the one used for object prediction. We denote the cross-entropy losses associated with these tasks by LNAV and LRECP. Visual Region Classification. Class-conditioned representations are useful for agent manipulation, especially when combined with hand-crafted procedures for object selection (Singh et al., 2020). Inspired by masked region modeling tasks (Chen et al., 2020b; Shrivastava et al., 2021), we select with 15% probability some of the objects in the agent\u2019s view at a given timestep t and ask the model to predict their classes. Given the instruction Turn around and walk to the book on the desk, at the very first timestep of the trajectory it is likely that none of the mentioned objects are visible. Thus, we assume that at the last step of a sub-goal the agent will have in view the objects associated with the instruction. For the prediction task, we directly use the time-dependent object embeddings \u02dcO and use an FFN (similar to Equation 1) to estimate a probability distribution over the ALFRED object labels. We use a cross-entropy loss denoted by LV RC as supervision for this task. 
[Table 2: Test Fold Performance. Path-weighted metrics are given in parentheses. Columns: Seen Task (PLW) | Seen GC (PLW) | Unseen Task (PLW) | Unseen GC (PLW).
SEQ2SEQ (Shridhar et al., 2020): 3.98 (2.02) | 9.42 (6.27) | .39 (0.08) | 7.03 (4.26)
HITUT (Zhang and Chai, 2021): 21.27 (11.10) | 29.97 (17.41) | 13.87 (5.86) | 20.31 (11.51)
MOCA (Singh et al., 2020): 22.05 (15.10) | 28.29 (22.05) | 5.30 (2.72) | 14.28 (9.99)
HLSM (Blukis et al., 2021): 25.11 (6.69) | 35.79 (11.53) | 16.29 (4.34) | 27.24 (8.45)
LWIT (Nguyen et al., 2021): 30.92 (25.90) | 40.53 (36.76) | 9.42 (5.60) | 20.91 (16.34)
EMBERT: 31.77 (23.41) | 39.27 (31.32) | 7.52 (3.58) | 16.33 (10.42)
ET (Pashevich et al., 2021): 38.42 (27.78) | 45.44 (34.93) | 8.57 (4.10) | 18.56 (11.46)
ABP (Kim et al., 2021): 44.55 (3.88) | 51.13 (4.92) | 15.43 (1.08) | 24.76 (2.22)]
5 Experiments and Results EmBERT achieves competitive performance with state-of-the-art models on the ALFRED leaderboard test sets (Table 2), surpassing all but ET (Pashevich et al., 2021) and ABP (Kim et al., 2021) on Seen test fold performance (Table 3) at the time of writing. Notably, EmBERT achieves this performance without augmenting ALFRED data with additional language instructions, as is done in ET (Pashevich et al., 2021), or with visual distortion as used in ABP (Kim et al., 2021). Implementation Details. EmBERT is implemented using AllenNLP (Gardner et al., 2017), PyTorch-Lightning (https://www.pytorchlightning.ai/), and Huggingface Transformers (Wolf et al., 2019). We train using the Adam optimizer with weight fix (Loshchilov and Hutter, 2017), learning rate 2e\u22125, and a linear rate scheduler without warmup steps. We use dropout of 0.1 for the hidden layers of the FFN modules and gradient clipping of 1.0 for the overall model weights. Our TransformerXL-based decoder is composed of 2 layers and 8 attention heads, and uses a memory cache of 200 slots. At training time, we segment the trajectory into 10 timesteps. 
In order to optimize memory consumption, we use bucketing based on the trajectory length. We use teacher forcing (Williams and Zipser, 1989) to supervise EmBERT during the training process. To decide when to stop training, we monitor the average of action and object selection accuracy at every timestep based on gold trajectories. The best epoch according to that metric computed on the validation seen set is used for evaluation. The total time for each epoch is about 1 hour, for a total of 20 hours for each model configuration, using EC2 p3.8xlarge instances with 1 GPU. Action Recovery Module. For obstacle avoidance, if a navigation action fails, for example the agent choosing MoveAhead when facing a wall, we take the next most confident navigation action at the following timestep, as in MOCA (Singh et al., 2020). We introduce an analogous object interaction recovery procedure. When the agent chooses an interaction action such as Slice, we first select the bounding box of highest confidence to retrieve an object interaction mask. If the resulting API action fails, for example if the agent attempts to Slice a Kettle object, we choose the next highest confidence bounding box at the following timestep. The ALFRED challenge ends an episode when an agent causes 10 such API action failures. Comparison to Other Models. Table 2 gives EmBERT performance against top and baseline models on the ALFRED leaderboard at the time of writing. Seen and Unseen sets refer to tasks in rooms that were or were not seen by the agent at training time. We report Task success rate and Goal-Conditioned (GC) success rate. Task success rate is the average number of episodes completed successfully. 
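The recovery strategy above is a confidence-ranked fallback loop bounded by ALFRED's 10-failure limit. A simplified sketch (function names and the environment callback are illustrative; the real module re-tries at the following timestep rather than within one step):

```python
def recover(ranked_candidates, env_step, max_failures=10):
    # Try candidates in decreasing confidence order; on an API failure,
    # fall back to the next one. ALFRED ends the episode after 10
    # failed API actions, so we stop there.
    failures = 0
    for candidate in ranked_candidates:
        if env_step(candidate):          # env_step returns True on success
            return candidate, failures
        failures += 1
        if failures >= max_failures:
            break
    return None, failures
```

The same loop serves both cases described above: ranked navigation actions for obstacle avoidance, and ranked bounding boxes for object interaction.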
Goal-conditioned success rate is more forgiving; each episode is scored in [0, 1] based on the number of subgoals satisfied; for example, in a STACK & PLACE task, if one of two mugs is put on a table, the GC score is 0.5 (Shridhar et al., 2020). Path-weighted success penalizes taking more than the number of expert actions necessary for the task. EmBERT outperforms MOCA (Singh et al., 2020) on Unseen scenes, and several models on Seen scenes. The primary leaderboard metric is Unseen success rate, measuring models\u2019 generalization abilities. Among competitive models, EmBERT outperforms only MOCA at Unseen generalization success. Notably, EmBERT remains competitive on Unseen path-weighted metrics, because it does not perform any kind of exploration or mapping as in HLSM (Blukis et al., 2021) and ABP (Kim et al., 2021). 
[Table 3: Validation Fold Performance. We present ablations adjusting the number of side-view bounding boxes (#SB), attended memory length (Mem), with and without predicting navigation target O, target parent object P(O), and visual region classification (VRC) loss. We also explore initializing our multi-modal encoder with BERT versus OSCAR initialization. Path-weighted metrics are given in parentheses. Columns: Init Weights | #SB | Mem | auxiliary losses enabled (among Nav O, P(O), VRC) | Seen Task (PLW) | Seen GC (PLW) | Unseen Task (PLW) | Unseen GC (PLW).
OSCAR | 18 | 200 | x x x | 28.54 (22.88) | 38.69 (31.28) | 1.46 (.72) | 10.19 (6.25)
OSCAR | 18 | 200 | x x | 34.76 (28.46) | 41.30 (35.50) | 3.66 (1.55) | 12.61 (7.49)
OSCAR | 18 | 200 | x x | 36.22 (27.05) | 44.57 (35.23) | 4.39 (2.21) | 13.03 (7.54)
OSCAR | 18 | 200 | x | 37.44 (28.81) | 44.62 (36.41) | 5.73 (3.09) | 15.91 (9.33)
OSCAR | 18 | 200 | (none) | 23.66 (17.62) | 29.97 (24.16) | 2.31 (1.24) | 12.08 (7.62)
BERT | 18 | 200 | x x | 26.46 (19.41) | 35.70 (27.04) | 3.53 (1.77) | 13.02 (7.57)
OSCAR | 9 | 200 | x x x | 29.30 (20.14) | 36.28 (27.21) | 3.06 (1.13) | 12.17 (6.69)
OSCAR | 9 | 200 | x x | 31.75 (23.52) | 38.80 (32.21) | 2.56 (1.28) | 12.97 (8.24)
OSCAR | 9 | 200 | x x | 20.37 (16.30) | 28.64 (23.11) | 1.46 (0.75) | 10.47 (6.26)
OSCAR | 9 | 200 | x | 28.33 (20.77) | 36.83 (28.03) | 2.68 (1.18) | 11.60 (6.78)
OSCAR | 9 | 200 | (none) | 27.84 (20.66) | 36.59 (27.97) | 2.44 (1.06) | 11.46 (6.76)
OSCAR | 0 | 200 | x x | 25.31 (18.79) | 34.27 (26.09) | 3.42 (1.49) | 12.25 (7.34)
OSCAR | 9 | 1 | x x | 20.98 (13.98) | 33.33 (22.74) | 1.10 (0.60) | 10.33 (4.69)
OSCAR | 18 | 1 | x x | 21.95 (12.99) | 35.04 (22.31) | 1.58 (.54) | 11.08 (6.18)
MOCA (Singh et al., 2020): 18.90 (13.20) | 28.02 (21.81) | 3.65 (1.94) | 13.63 (8.50)]
We do not utilize the MOCA Instance Association in Time module (Singh et al., 2020) that is mimicked by ET (Pashevich et al., 2021). That module is conditioned on the object class of the target object selected across timesteps. Because we directly predict object instances without conditioning on a predicted object class, our model must learn instance associations temporally in an implicit manner, rather than using such an inference-time \u201cfix\u201d. EmBERT Ablations. Removing the object-centric navigation prediction unique to EmBERT decreases performance on all metrics (Table 3). We show that limiting memory for the action decoder to a single previous timestep, initializing with BERT rather than OSCAR weights, and limiting vision to the front view all decrease performance in both Seen and Unseen folds. We find that our parent prediction and visual region classification losses, however, do not improve performance. To investigate whether a smaller model would benefit more from these two auxiliary losses, we ran EmBERT with only 9 bounding boxes per side view, which enables fitting longer training segments in memory (we use 14 timesteps, rather than 10). 
We found that those losses improved EmBERT performance on the Unseen environments via both success rate and goal-conditioned metrics, and improved success rate alone in Seen environments when the non-frontal views were limited to 9, rather than 18, bounding boxes. Given the similar performance of EmBERT with all three auxiliary losses at 18 and 9 side views, we believe EmBERT is over-parameterized with the additional losses and 18 side-view bounding boxes. It is possible that data augmentation efforts to increase the volume of ALFRED training data, such as those in ET (Pashevich et al., 2021), would enable us to take advantage of the larger EmBERT configuration. 6 Conclusions We apply the insight that object-centric navigation is helpful for language-guided Embodied AI to a benchmark of tasks in home environments. Our proposed Embodied BERT (EmBERT) model adapts the pretrained language model transformer OSCAR (Li et al., 2020), and we introduce a decoupled transformer embedding and decoder step to enable attending over many features per timestep as well as a history of previous embedded states (Figure 1). EmBERT is the first to bring object-centric navigation to bear on language-guided, manipulation- and navigation-based task completion. We find that EmBERT\u2019s object-centric navigation and ability to attend across a long time horizon both contribute to its competitive performance with state-of-the-art ALFRED models (Table 3). Moving forward, we will apply EmBERT to other benchmarks involving multimodal input through time, such as vision and audio data (Chen et al., 2020a), as well as wider arrays of tasks to accomplish (Puig et al., 2018). 
To further improve performance on the ALFRED benchmark, we could conceivably continue training the Mask RCNN model from MOCA (Singh et al., 2020) indefinitely by randomizing scenes in AI2THOR (Kolve et al., 2017) and having the agent view each scene from randomized vantage points, with gold-standard segmentation masks available from the simulator. For language supervision, we could train and apply a speaker model for ALFRED to generate additional training data for new expert demonstrations, providing an initial multimodal alignment for EmBERT, a strategy shown to be effective in VLN tasks (Fried et al., 2018). 7 Implications and Impact We evaluated EmBERT only on ALFRED, whose language directives are provided as a one-sided \u201crecipe\u201d for accomplishing a task. The EmBERT architecture is applicable to single-instruction tasks like VLN, as long as auxiliary navigation object targets can be derived from the data as we have done here for ALFRED, by treating the \u201crecipe\u201d of step-by-step instructions as empty. In future work, we would like to apply our model to navigation tasks involving dialogue (Thomason et al., 2019; de Vries et al., 2018) and to real robot platforms (Banerjee et al., 2020) where lifelong learning is possible (Thomason et al., 2015; Johansen et al., 2020). Low-level physical robot control is more difficult than the abstract locomotion used in ALFRED, and poses a separate set of challenges (Blukis et al., 2019; Anderson et al., 2020). By operating only in simulation, our model also misses the full range of experience that can ground language in the world (Bisk et al., 2020), such as haptic feedback during object manipulation (Thomason et al., 2020, 2016; Sinapov et al., 2014), and audio (Chen et al., 2020a) and speech (Harwath et al., 2019; Ku et al., 2020) features of the environment.
Further, in ALFRED an agent never encounters novel object classes at inference time, which represent an additional challenge for successful task completion (Suglia et al., 2020). The ALFRED benchmark, and consequently the EmBERT model, only evaluates and considers written English. EmBERT inherently excludes people who cannot use typed communication. By training and evaluating only on English, we can only speculate whether the object-centric navigation methods introduced for EmBERT will generalize to other languages. We are cautiously optimistic that, with the success of massively multi-lingual language models (Pires et al., 2019), EmBERT would be able to train with non-English language data. At the same time, we acknowledge the possibility of pernicious, inscrutable priors and behavior (Bender et al., 2021) and the possibility for targeted, language prompt-based attacks (Song et al., 2021) in such large-scale networks."
+ },
+ {
+ "url": "http://arxiv.org/abs/2011.02917v1",
+ "title": "Imagining Grounded Conceptual Representations from Perceptual Information in Situated Guessing Games",
+ "abstract": "In visual guessing games, a Guesser has to identify a target object in a\nscene by asking questions to an Oracle. An effective strategy for the players\nis to learn conceptual representations of objects that are both discriminative\nand expressive enough to ask questions and guess correctly. However, as shown\nby Suglia et al. (2020), existing models fail to learn truly multi-modal\nrepresentations, relying instead on gold category labels for objects in the\nscene both at training and inference time. This provides an unnatural\nperformance advantage when categories at inference time match those at training\ntime, and it causes models to fail in more realistic \"zero-shot\" scenarios\nwhere out-of-domain object categories are involved. To overcome this issue, we\nintroduce a novel \"imagination\" module based on Regularized Auto-Encoders, that\nlearns context-aware and category-aware latent embeddings without relying on\ncategory labels at inference time. Our imagination module outperforms\nstate-of-the-art competitors by 8.26% gameplay accuracy in the CompGuessWhat?!\nzero-shot scenario (Suglia et al., 2020), and it improves the Oracle and\nGuesser accuracy by 2.08% and 12.86% in the GuessWhat?! benchmark, when no gold\ncategories are available at inference time. The imagination module also boosts\nreasoning about object properties and attributes.",
+ "authors": "Alessandro Suglia, Antonio Vergari, Ioannis Konstas, Yonatan Bisk, Emanuele Bastianelli, Andrea Vanzo, Oliver Lemon",
+ "published": "2020-11-05",
+ "updated": "2020-11-05",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CV",
+ "cs.LG"
+ ],
+ "main_content": "Introduction Humans do not learn conceptual representations from language alone, but from a wide range of situational information (Beinborn et al., 2018; Bisk et al., 2020), as also highlighted by property-listing experiments (McRae et al., 2005). When humans experience the concept of \u201cboat\u201d, they simulate a new representation by reactivating and aggregating multi-modal representations that reside in their memory and are associated with the concept of \u201cboat\u201d (e.g., what a boat looks like, the action of sailing, etc.) (Barsalou, 2008). This simulation process is called perceptual simulation. It is therefore no wonder that recent trends in learning conceptual representations adopt multi-modal and holistic approaches (Bruni et al., 2014) wherein abstract distributional lexical representations (Landauer and Dumais, 1997; Laurence and Margolis, 1999) learned from text corpora are augmented or refined with perceptual information for concrete and context-aware representations built from visual (Kiela et al., 2018; Lazaridou et al., 2015), olfactory (Kiela et al., 2015), or auditory (Kiela and Clark, 2015) modalities. Language games between AI agents, inspired by Wittgenstein\u2019s Language Games among humans (Wittgenstein et al., 1953), are an excellent test bed for such approaches, since concepts are expected to emerge when agents are required to communicate to solve specific tasks in specific environments. GuessWhat?! (De Vries et al., 2017) is a prototypical language game of this kind: a Guesser has to identify a target object in a scene represented as an image by asking questions to an Oracle. Learning to ground pixels of the scene into object representations that are relevant for the object category they belong to (category-aware), but are also particularized for the specific scene (context-aware), is fundamental for the Guesser to effectively converse with the Oracle and vice versa.
This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/. arXiv:2011.02917v1 [cs.CL] 5 Nov 2020. [Figure 1 graphic elided; the panels contrast training time and inference time, showing objects such as a person, donuts, and a pasticciotto together with their latent embeddings z and category embeddings c.] Figure 1: Common approaches to visual grounding such as De Vries et al. (2017) and Zhuang et al.
(2018) rely on gold category labels at test time, thereby failing to ground novel objects from categories not seen during training (e.g., a \u201cpasticciotto\u201d, top right) or to properly encode known categories with unseen visual features (like a \u201cfrosted donut\u201d, bottom right), since they employ category embeddings c from a predefined set that are fixed for each object. Instead, the embeddings z learned by our imagination module can be flexibly category-aware, allowing them to generalize to unseen categories. We consider a model truly multi-modal if it always uses all the modalities to make decisions. However, existing approaches (De Vries et al., 2017; Shekhar et al., 2019) rely instead on gold category labels that are assumed to be available also at inference time, thus making these models depend on this modality and discard the others. This not only poses an unnatural performance advantage for players in controlled benchmark scenarios like the GuessWhat?! game, where categories at inference time match those at training time, but also causes them to fail in more realistic zero-shot scenarios (Suglia et al., 2020) where players are required to generalize to out-of-domain object categories. For example, consider an agent that during training has only seen glazed donuts, associated with the fixed \u201cdonut\u201d category embedding (cf. Figure 1). At inference time, the model cannot ground visual representations for objects belonging to the \u201cpasticciotto\u201d (an Italian pastry) category, since such a category was not in its repertoire. Similarly, it will likely represent frosted donuts with a generic \u201cdonut\u201d embedding, despite the perceptual differences among different types of donut.
In this paper, we tackle the above limitations by introducing a novel imagination module based on Regularized Auto-encoders (Ghosh et al., 2019), which is able to derive imagination embeddings directly from perceptual information in the form of the object crop. Our formulation of the reconstruction loss allows the model to learn context-aware and category-aware imagination embeddings, removing the need for gold category labels at inference time and greatly improving zero-shot generalization. In Section 4.2, we integrate our imagination component into the Oracle model of De Vries et al. (2017) and the Guesser model of Shekhar et al. (2019). We show that the new imagination models are state-of-the-art on the recently introduced CompGuessWhat?! benchmark (Suglia et al., 2020), outperforming current models by 8.26%. Our module also improves the Oracle\u2019s and Guesser\u2019s accuracy (by 2.08% and 12.86%, respectively) on the standard GuessWhat?! benchmark when no gold category labels are available. Lastly, we show that imagining latent object representations greatly helps to reason about object visual properties (i.e., color, shape, etc.), qualifying our module as a generic perceptual simulation component \u00e0 la Barsalou (2008). 2 Background: Guessing Games and Concept Representations GuessWhat?! is an instance of a multi-word guessing game (Steels, 2015). Every game involves two players: an Oracle and a Guesser conversing about a scene S (a natural image). A scene S can be abstracted into a collection of objects O, each of which is associated with a category ci \u2208 C, i \u2208 {1, . . . , K}. The aim of the Guesser is to identify a target object o\u2217 \u2208 O by asking questions about S to the Oracle. The gameplay of GuessWhat?!
thus comprises three tasks: i) question generation, where the Guesser inquires about an object in the scene S given the dialogue generated so far; ii) answer prediction, where the Oracle answers a \u2208 A = {Yes, No, N/A} given the scene S, the question, and the target object o\u2217; and iii) target prediction, where the Guesser selects the candidate object with the highest relevance score r(oi). Several architectural variants have been proposed to tackle GuessWhat?! (cf. Section 5 for some related works). In this work, we adopt the recent GDSE model (Shekhar et al., 2019), which learns a visually grounded dialogue state used for both question generation and target object prediction. As shown below, GDSE does not deliver the desired multi-modality, so we extend it with our imagination component to obtain more effective multi-modal object representations. For successful gameplay, both the Guesser and the Oracle must build representations of the scene that contain specific perceptual information about objects (object-aware), are relevant for the object category they belong to (category-aware), and are specialized to the scene in which the game is played (context-aware). As the scene S is an image, it is natural to associate each object oi \u2208 O with a perceptual embedding, i.e., a vector vi \u2208 RdO extracted from the penultimate layer of a pretrained vision model (e.g., ResNet-152 (Shekhar et al., 2019)) based on its bounding box.1 However, these representations are not sufficient, as they are neither context-aware nor category-aware, i.e., they ignore other objects in the scene and do not leverage their category information. GDSE and other recent approaches (De Vries et al., 2017; Shekhar et al., 2019; Zhuang et al., 2018; Shukla et al., 2019) cope with the second issue by introducing category embeddings as dC-dimensional continuous representations ck \u2208 RdC for k = 1, . . . , K.
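The three sub-tasks listed at the start of this section can be sketched as a single game loop. The callables below (ask_question, answer_question, relevance) are hypothetical stand-ins for the Guesser and Oracle models, not the paper's actual code.

```python
# Minimal sketch of the three GuessWhat?! sub-tasks described above.
ANSWERS = ('Yes', 'No', 'N/A')

def play(objects, target, ask_question, answer_question, relevance,
         max_turns=5):
    dialogue = []
    for _ in range(max_turns):
        q = ask_question(objects, dialogue)       # i) question generation
        a = answer_question(objects, target, q)   # ii) answer prediction
        assert a in ANSWERS
        dialogue.append((q, a))
    # iii) target prediction: candidate with the highest relevance score r(o_i)
    return max(objects, key=lambda o: relevance(o, dialogue))
```

The loop makes explicit that only step iii) commits to a guess, after the dialogue has accumulated evidence about the target.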
Once learned, a category embedding c is concatenated to an 8-dimensional feature vector si derived from the object bounding box (cf. De Vries et al. (2017)). While these embeddings partially solve category-awareness, they are not object-aware. For instance, the embedding for the object category \u201capple\u201d will be the same regardless of whether a particular object is a red or a green apple, i.e., it is most likely a centroid representation of the objects seen during training. Moreover, if during training we only see red apples, at inference time we will likely fail to detect green apples as belonging to the same category (Figure 2(a)). These issues have gone unnoticed since category embeddings usually boost performance on the original GuessWhat?! task, given that gold category labels are also available at inference time. However, this boost is illusory: models relying on this symbolic information always being available are not learning to exploit all modalities. In fact, Zhuang et al. (2018) report a 20% drop in Guesser accuracy on GuessWhat?! when gold category labels are not provided, and analogously poor results are observed in more realistic benchmarks measuring zero-shot generalization such as CompGuessWhat?! (Suglia et al., 2020). 3 Imagination Module: Learning Context- and Category-aware Object Representations To overcome the limitations of GDSE and its competitors and realize a form of perceptual simulation in a learning system, we introduce a generic component\u2014named the imagination module\u2014which learns latent concept representations that are both context- and category-aware, without relying on category labels at inference time. Our imagination module can be understood in the context of representation learning via deep generative models (Bengio et al., 2013), popularized by variational autoencoders (VAEs) (Kingma and Welling, 2013; Kingma et al., 2014) and GANs (Goodfellow et al., 2014).
Specifically, we substantially extend the recently introduced regularized autoencoder (RAE) framework (Ghosh et al., 2019). RAEs are simplified VAEs in which stochasticity in the encoder and decoder is dropped in favor of more stable training and more informative embedding learning. In fact, RAEs do not suffer from several issues known to affect VAEs, such as poor convergence and the possibility of learning embeddings that are independent of the input images (cf. Ghosh et al. (2019) for a detailed discussion). [Footnote 1: Bounding boxes are assumed to be given, e.g., by using object recognition as a pre-processing step (Anderson et al., 2018).] More crucially for our purposes, RAEs do not have to compromise the informativeness of the learned embeddings with a fixed a-priori structure in the latent space that enables simple
o47yz2AI/gymOA9Sg0BZu8UGfPAlMldSNCakpWbB/TzhQ1s5UGToXV65rc/K/WqnWNlP9onBCtx2h5svFdScTapK5e0klDHKSswCAGxFuT/gpGOAUPI7jXOM5b5QCXbkczFTBhR9nhcs7XYUGJDfIvMsJL8gt5SR4Gwc3s3XvLoOjZ8NsZ/jyw85gb3/l6yZ7zLbZU5ax52yPvWGHbMQ4+8q+se/sR+9X1I8eRA +XrVFvNXOf/RPR9m8qEr4B (a) Imagination module (b) Oracle model (c) Guesser model Figure 2: Imagination-based Representation Learning: Given the perceptual information vi of object oi, we learn an imagination embedding zi generated by Encoder E\u03c6. The latent code is optimized to reconstruct the original visual representation vi (the \u201cdonut\u201d ResNet encoding) via the reconstruction loss LIMG REC using the Decoder D\u03b8. Figures 2(b) and 2(c) show how the imagination embedding z replaces the category embedding c in the Oracle model from De Vries et al. (2017) and Guesser model from Shekhar et al. (2019) respectively, and is concatenated to the spatial information si. sampling (e.g., an isotropic Gaussian prior). VAEs which need to have such a \ufb01xed prior, instead, are deemed to learn embeddings that are less informative w.r.t. objects, categories, and context information. Module architecture. Figure 2(a) summarizes our imagination module. Its aim is to distill a context and category-aware embedding zi \u2208RdZ per object oi in scene S. To this end, we adopt an encoder E\u03c6 parameterized by \u03c6 that maps a perceptual embedding vi of object oi to its imagined counterpart zi, i.e., E\u03c6(vi) = zi. A decoder D\u03b8 realizes the inverse mapping \u02dc vi = D\u03b8(zi), with \u02dc vi \u2208RdO being also called the reconstruction of the input vi. As in RAEs, our per-object loss LIMG comprises a reconstruction loss (LREC), weighting how good the reconstructions of D\u03b8 are w.r.t. the encoded representations by E\u03c6, and a regularization term (LREG) enhancing generalization by smoothing the decoder D\u03b8. 
This leads to the following composite loss: L_IMG = L_REC + α · L_REG, (1) where α is a hyperparameter controlling regularization. (Footnote 2: Ghosh et al. (2019) use two different hyperparameters for the two terms in L_REG. Optimizing them independently had no evident benefit in our experiments, hence we simply treat them as a single regularizer.) As in L2-RAE (Ghosh et al., 2019), the regularization component is defined as L_REG := ‖zi‖ + ‖θ‖²: the first term bounds the latent embedding space learned by Eφ, easing optimization; the second enforces smoothing over Dθ, improving generalization over regions of the latent space that are unseen during training. Differently from RAEs, we devise a specific reconstruction loss tailored to learning contextual and category-aware representations. In conventional RAEs, in fact, the reconstruction loss is defined as the Mean Squared Error (MSE) representing the distance between vi and its reconstruction ṽi, so that L_RAE_REC := MSE(vi, ṽi). This loss is purely unsupervised and as such agnostic to object categories or to the scene context. To our aims, we define a custom imagination reconstruction loss L_IMG_REC as an instance of a max-margin triplet loss (Wang et al., 2014; Schroff et al., 2015), as follows. Let ci be the category of object oi with perceptual embedding vi in scene S, and let O¬ci = {oj | oj ∈ O ∧ cj ≠ ci} be the set of all objects in S belonging to a different category than ci. Our per-object L_IMG_REC term is defined as: L_IMG_REC := max(0, η + MSE(vi, Dθ(zi)) − MSE(vj, Dθ(zi))), (2) where η is the minimum margin between two components: i) the distance between the perceptual embedding vi and its reconstruction Dθ(zi), and ii) the distance between the perceptual embedding vj of a randomly sampled object oj ∈ O¬ci and the reconstruction Dθ(zi).
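The composite loss above can be sketched in a few lines of plain Python. The helper names and toy vectors are our own illustrative assumptions rather than the authors' implementation; the margin form follows Eq. (2) and the regularizer follows Eq. (1).

```python
def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def imagination_rec_loss(v_i, v_j, v_rec, eta):
    """Eq. (2): pull the reconstruction toward the anchor v_i and push it
    at least eta away (in MSE terms) from the negative v_j, an object of
    a different category in the same scene."""
    return max(0.0, eta + mse(v_i, v_rec) - mse(v_j, v_rec))

def imagination_loss(v_i, v_j, v_rec, z_i, theta, eta, alpha):
    """Eq. (1): L_IMG = L_REC + alpha * L_REG, with the L2-RAE-style
    regularizer L_REG = ||z_i|| + ||theta||^2 over the decoder weights."""
    reg = sum(x * x for x in z_i) ** 0.5 + sum(w * w for w in theta)
    return imagination_rec_loss(v_i, v_j, v_rec, eta) + alpha * reg

# A perfect reconstruction of v_i with a distant negative incurs zero
# reconstruction loss:
imagination_rec_loss([1.0, 0.0], [0.0, 1.0], [1.0, 0.0], eta=0.5)  # 0.0
```

Sampling `v_j` locally from the same scene (rather than from the batch) is what makes the contrast context-dependent.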
By doing so, we enforce each object representation to be representative of its category given a specific context, by locally contrasting it to another object of a different category in the same scene. Note that this is strikingly different from previous approaches employing a max-margin loss (Elliott and Kádár, 2017; Kiros et al., 2018), where “negative” objects are arbitrarily sampled from other scenes in the same batch.
Imagining at inference time. Differently from the category embeddings c employed by all previous work, our imagination embeddings z do not depend on gold category labels at inference time, while still being context-aware and category-aware. In fact, once the parameters φ have been learned, the encoder Eφ contains all the information needed to distill embeddings z independently of L_IMG, which is necessary only at training time. We consider imagination the ability of the model to generate latent representations on-the-fly. Therefore, for both the Guesser and Oracle models we consider an object representation for object oi that replaces ci with zi and concatenates it with its spatial information si (see Figures 2(b) and 2(c) and Appendix A.1 for details). By doing so, we consider every gameplay situated in a reference scene as an experience where our imagination module is able to derive a latent conceptual representation simply by “looking” at objects, realizing a perceptual simulator (Barsalou, 2008). We plan to investigate how to combine label-dependent category embeddings c with our imagination embeddings z, similarly to how some VAE variants tackle semi-supervised classification scenarios (Kingma et al., 2014).
4 Experimental Investigation
To assess the impact of using the imagination embeddings against the category embeddings, we use two evaluation benchmarks: GuessWhat?! and CompGuessWhat?!. More information about the training procedure can be found in Appendix A.2.
4.1 GuessWhat?!
Evaluation
In this experiment, we evaluate the accuracy of the Oracle in answering questions and the accuracy of the Guesser in selecting the target object. We consider as both training and evaluation data all the gold dialogues (and questions) that have been labeled as successful in the dataset (De Vries et al., 2017). We want to highlight that in this evaluation phase, the models using label-aware object encodings have gold information both at training and test time. This is true for both the Oracle and Guesser models. However, it does not hold for the models using the imagination component.
4.1.1 Experimental Setup
Oracle task. We evaluate the imagination-based Oracle and compare it to several combinations of the following baselines, with and without category embeddings, from De Vries et al. (2017): 1) MAJORITY: majority classifier; 2) QUESTION: uses only the question; 3) IMAGE: uses only the image representation; 4) CROP: uses only the crop representation of the target object.
Guesser task. Similarly, we compare the GDSE model using imagination embeddings (GDSE+IMAGINATION) with the following label-aware baselines: 1) text-only baselines using an LSTM encoder (LSTM) and the Hierarchical Recurrent Encoder-Decoder architecture (Serban et al., 2017) (HRED), as well as their corresponding multi-modal models LSTM+IMAGE and HRED+IMAGE; 2) PARALLELATTENTION (Zhuang et al., 2018) and GDSE (Shekhar et al., 2019). We also compare with variants of the above that do not use any category embeddings or gold category labels (*-NOCAT), as well as models with predicted category labels (*-PREDCAT). (Footnote 3: We train an object classifier using as input the ResNet-101 features generated for the object crop. It achieves 65% accuracy evaluated on all objects in the GuessWhat?! test set.)
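As a concrete picture of how a Guesser can consume the imagination embeddings, here is a minimal sketch in plain Python: each candidate object is represented by its imagination embedding concatenated with its spatial features (replacing the category embedding), and is scored against the dialogue state with a dot product. The vector layout and scoring rule are simplified assumptions for illustration, not the exact GDSE architecture.

```python
def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def guess_target(dialogue_state, imagination_embs, spatial_feats):
    """Score each candidate representation [z_i ; s_i] against the
    dialogue state and return the index of the best-scoring candidate."""
    candidates = [z + s for z, s in zip(imagination_embs, spatial_feats)]
    scores = [dot(dialogue_state, c) for c in candidates]
    return max(range(len(scores)), key=scores.__getitem__)

# Two candidates in a 3-d joint space; the second aligns with the state.
guess_target([0.0, 1.0, 1.0],
             [[1.0, 0.0], [0.0, 1.0]],   # imagination embeddings z_i
             [[0.0], [1.0]])             # spatial features s_i
```

Since `z_i` comes from the encoder at inference time, no gold category label enters this scoring step.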
MODEL (DV-QUES+SPATIAL)   | PERCEPTUAL INFORMATION                          | CATEGORICAL INFORMATION
                          | LOCATION  SHAPE   COLOR   TEXTURE  SIZE         | SUPER-CATEGORY  OBJECT
+ CROP                    | 66.86%    69.08%  67.25%  68.30%   65.09%       | 88.94%          80.48%
+ CATEGORY                | 67.48%    68.42%  61.83%  70.08%   60.14%       | 97.09%          88.82%
+ CATEGORY + CROP         | 65.27%    60.34%  59.14%  65.76%   59.08%       | 96.19%          86.32%
+ IMAGINATION             | 68.62%    69.08%  67.64%  69.86%   62.65%       | 90.05%          82.32%
Table 2: Oracle accuracy grouped by question type for the best Oracle model with category information (DV-QUES+SPATIAL) and for multi-modal variants using either perceptual or categorical information.
4.1.2 Results
MODEL                        VAL      TEST
BASE:
MAJORITY                     53.80%   49.10%
QUES                         58.30%   58.80%
IMG                          53.30%   53.30%
CROP                         57.30%   57.00%
W/ CAT:
DV-QUES+CAT                  74.20%   74.30%
DV-QUES+CROP+CAT             75.60%   75.30%
DV-QUES+SPATIAL+CAT          78.90%   78.50%
DV-QUES+SPATIAL+CROP+CAT     78.30%   77.90%
DV-QUES+SPATIAL+IMG+CAT      76.80%   76.50%
MM:
DV-QUES+CROP                 70.90%   70.80%
DV-QUES+IMG                  59.80%   60.20%
DV-QUES+SPATIAL              68.80%   68.70%
DV-QUES+SPATIAL+CROP         74.00%   73.80%
DV-QUES+SPATIAL+CROP+IMG     72.30%   72.10%
IMAGINATION                  75.78%   75.88%
Table 1: Oracle results on gold questions: we compare the IMAGINATION Oracle model to models from De Vries et al. (2017) (DV-*). We group them into models relying on gold category labels (W/ CAT) and models that only use multi-modal perceptual information (MM).
Oracle task. In Table 1, we divide configurations into category-aware (De Vries et al., 2017) and multi-modal. The reference model for several other publications on GuessWhat?! is the category-aware model QUESTION+SPATIAL+CATEGORY. However, by relying on symbolic information in the form of category labels, it is inevitably no longer truly multi-modal, because the heavy lifting is done by these embeddings. As shown in the results, other multi-modal models, such as QUESTION+SPATIAL+CROP and QUESTION+CROP, are not able to learn representations effective enough to bridge the gap between category-aware and category-free models.
On the other hand, the proposed imagination model is able to reduce this gap without relying on gold information as input. Indeed, we are able to learn category-aware and context-aware latent codes by using category information only in our loss function. We investigate this argument further by using a rule-based question classifier (Shekhar et al., 2019) to partition the test questions according to their type. Table 2 summarizes this analysis; we include the models considered truly multi-modal and the best Oracle model, QUESTION+SPATIAL+CATEGORY. The latter can answer with high accuracy questions about specific object instances (e.g., “is it the dog?”) or super-categories (e.g., “is it an animal?”) since it uses category embeddings as input. However, when it comes to answering questions about perceptual properties of the target object, it loses some accuracy points because the perceptual information is missing from the category embedding, which represents a centroid of typical instances seen only at training time. On the other hand, the IMAGINATION model brings improvements of 1.34%, 5.81%, and 2.52% for location, color, and shape questions, respectively. On questions related to perceptual information, models using crop information seem to be on par with the IMAGINATION model. However, our model obtains an improvement over +CROP of 1.84% on object questions and of 1.11% on super-category questions solely by relying on the imagination embeddings.
Guesser task. Table 3 compares several category-aware and multi-modal models; PARALLELATTENTION and GDSE-SL are the two best-performing configurations. However, when PARALLELATTENTION does not have access to category information (PARALLELATTENTION-NOCAT), its performance drops by 3.7% (as also noted by Zhuang et al. (2018)).
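The question-type partitioning above relies on a rule-based classifier. A minimal keyword sketch conveys the idea; the keyword lists here are illustrative assumptions of our own, not the published rules of Shekhar et al. (2019).

```python
# Each rule maps a question type to trigger keywords; first match wins.
RULES = [
    ("location", {"left", "right", "front", "behind", "top", "bottom", "near"}),
    ("color", {"red", "blue", "green", "white", "black", "brown", "yellow"}),
    ("shape", {"round", "square", "rectangular"}),
    ("size", {"big", "small", "large", "tall", "tiny"}),
]

def question_type(question):
    """Assign a perceptual question type, falling back to object/
    super-category questions when no perceptual keyword fires."""
    tokens = set(question.lower().strip("?").split())
    for qtype, keywords in RULES:
        if tokens & keywords:
            return qtype
    return "object"

question_type("is it the red one?")  # "color"
```

Partitioning test questions this way is what allows the per-type accuracy breakdown reported in Table 2.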
We confirmed the same behavior for GDSE-SL as well (GDSE-SL-NOCAT), noticing a more significant drop in performance of 16.95%, which is in line with the simpler LSTM+IMAGE model. On the other hand, GDSE-SL with our imagination component (GDSE-SL+IMAGINATION) performs comparably with the category-aware model and better than all multi-modal models. Therefore, we argue that it is possible to learn object representations that, given a representation for the current dialogue state, allow for discriminating the target object among other candidates without relying on symbolic information.
                          Gameplay  Attribute Prediction            Zero-shot Gameplay
MODEL                     ACCURACY  A-F1   S-F1   AS-F1  L-F1      ND-ACC   OD-ACC    GROLLA
RANDOM                    15.81%    15.1   0.1    7.8    2.8       16.9%    18.6%     13.3
DEVRIES-SL                41.5%     46.8   39.1   48.5   42.7      31.3%    28.4%     38.5
DEVRIES-RL                53.5%     45.2   38.9   47.2   42.5      43.9%    38.7%     46.2
GDSE-SL                   49.1%     59.9   47.6   60.1   48.3      29.8%    22.3%     43.0
GDSE-CL                   59.8%     59.5   47.6   59.8   48.1      43.4%    29.8%     50.1
GDSE-SL+IMAGINATION       43.82%    56.23  47.37  57.2   51.73     39.19%   39.90%    45.50
GDSE-CL+IMAGINATION       51.98%    57.59  47.6   58.31  50.42     46.56%   46.96%    50.74
Table 4: Results for the CompGuessWhat?! benchmark (Suglia et al., 2020). We assess model quality in terms of gameplay accuracy; attribute prediction quality, measured in terms of F1 for the abstract (A-F1), situated (S-F1), abstract+situated (AS-F1) and location (L-F1) prediction scenarios; as well as zero-shot gameplay. GROLLA is a macro-average of the individual scores.
4.2 CompGuessWhat?!
Evaluation
MODEL                            VAL      TEST
HUMAN                            90.80%   90.80%
RANDOM                           17.10%   17.10%
CATEGORY:
LSTM                             62.10%   61.30%
HRED                             61.80%   61.00%
LSTM+IMAGE                       61.50%   60.50%
HRED+IMAGE                       61.60%   60.40%
PARALLELATTENTION                63.80%   63.40%
GDSE-SL                          63.14%   62.96%
GDSE-SL-PREDCAT                  52.08%   51.00%
MM:
LSTM+IMAGE-NOCAT                 50.10%   48.60%
PARALLELATTENTION-NOCAT          55.70%   59.70%
GDSE-SL-NOCAT                    46.11%   46.01%
GDSE-SL-IMAGINATION              59.54%   58.90%
Table 3: Guesser accuracy on successful gold dialogues: we compare GDSE-SL-IMAGINATION with i) models that are truly multi-modal (MM) and ii) models that use category information (CATEGORY).
CompGuessWhat?! is a benchmark proposed to assess the quality of models’ representations and out-of-domain generalization. It includes the following tasks: a) in-domain gameplay accuracy – selecting the target object with model-generated dialogues as input; b) the attribute prediction task – assessing the ability of the dialogue representation to recover target object attributes; and c) zero-shot gameplay accuracy – selecting the target object among objects belonging to categories never seen by the model during training. In contrast to GuessWhat?!, the attribute prediction and zero-shot tasks give us more insight into the quality of the learned representations and the model’s generalization ability.
4.2.1 Experimental Setup
We compare imagination-based models with the baselines used in Suglia et al. (2020): 1) RANDOM: randomly selects an object; 2) DEVRIES-SL: presented in De Vries et al. (2017), trained using Supervised Learning; 3) DEVRIES-RL: DEVRIES-SL with the Questioner fine-tuned using Reinforcement Learning (Strub et al., 2017); and 4) GDSE-SL and 5) GDSE-CL, which are the same as used in Section 4.1.
4.2.2 Results
In-domain gameplay. Table 4 presents the results on the CompGuessWhat?! benchmark. Models are tasked to play the game by generating up to 10 questions and corresponding answers.
Firstly, we note that the results for GDSE-CL+IMAGINATION, the collaborative version of the model with imagination, are still in the same ballpark as more complex models, such as DEVRIES-RL, which uses category embeddings as input. At the same time, we notice that overall both imagination models perform worse than the GDSE-* models. We impute this drop to the introduction of additional loss terms that probably changed the training dynamic of an already cumbersome modulo-n multi-task training (Shekhar et al., 2019). This downside calls for a more principled way of handling tasks of different complexity (i.e., question generation and target prediction) in a multi-task learning system; we leave this for future work.
Attribute prediction. Table 4 reports the attribute prediction task results. In this scenario, we underline the fact that the dialogue state representation generated by the Guesser model is used to recover several types of attributes associated with the target object. In this work, we use the same dialogue state representation as used by Shekhar et al. (2019) and focus only on improving the object representations using the imagination component. Indeed, the best imagination model, GDSE-SL+IMAGINATION, is in line with GDSE-SL, currently the best model in terms of attribute prediction. In particular, even though the dialogue state representation is only indirectly affected by the imagination embeddings (via a dot-product operation to score the candidate objects), we can still see an improvement in terms of F1 for Location attributes (L-F1) and similar performance for Situated attribute prediction (S-F1). Both can be considered, to some extent, a result of better situated object representations.
Zero-shot gameplay. As underlined in Section 3, the imagination module’s main strength is its ability to distill imagination embeddings from perceptual information only, without relying on externally provided category labels.
The zero-shot gameplay scenario from CompGuessWhat?! (Table 4) sheds some light on the ability of the model to generalize to out-of-distribution examples. In the out-of-domain gameplay scenario, where candidate objects belonging to never-before-seen categories are present, both imagination-based models GDSE-SL+IMAGINATION and GDSE-CL+IMAGINATION outperform the previous best-performing system DEVRIES-RL by 1.2% and 8.26%, respectively, in terms of OD accuracy (OD-ACC). By analyzing their output, we notice that the best imagination model achieves higher accuracy by learning a better gameplay strategy involving half the amount of location questions generated by DEVRIES-RL (39.68% vs 75.84%; see Appendix A.3 for more details). A further improvement in the near-domain scenario (ND-ACC) confirms the effectiveness of the imagination component in generating category embeddings for objects on-the-fly using only perceptual information.
Out-of-domain error analysis. Lastly, we report an error analysis comprising 50 dialogues selected at random from out-of-domain games (for more details refer to Appendix A.3). First, we manually annotated the Oracle answers and partitioned them according to their type using the same question classifier used for the Oracle task (Section 4.1.2). 83% of super-category questions (from a total of 80) were correctly answered by the model, and 63.36% of color-related questions (from a total of 88) were correctly answered. For instance, as shown in Figure 3, GDSE-CL is not able to correctly answer the question “is it a person?” because it does not have category information for the label “girl” but only for the label “person”.
Figure 3: Qualitative examples in the zero-shot gameplay scenario: the categories ‘girl’ and ‘antelope’ are not present in MSCOCO and therefore cannot be encoded by the GDSE-CL model. On the other hand, the imagination model is able to distill imagination embeddings by using the crop features only (for the sake of presentation quality, consecutive repeated questions are removed).
Example 1:
GDSE-CL+imagination (status: failed): is it a person? yes / is it the kid on the bike? yes / is it the whole person? yes / is it wearing an helmet? yes
GDSE-CL (status: failed): is it a person? no / is it a bike? no / is it the green bag? no / is the the bike? yes / is it the bike in the back? no / is the the bike wheel to the right? no / is it the bike to the left of the bike? yes / is it the whole bike? yes
Example 2:
GDSE-CL+imagination (status: success): is it an animal? yes / is it the one in the middle? yes / is it the whole animal? yes / is it the one whose head is visible? yes / is it the whole animal? yes / is it the entire animal? yes / is the head facing to the left side of the picture? yes / is the head facing to the left? yes
GDSE-CL (status: failed): Is it a giraffe? no / Is it a car? no / Is it the wood plank in the background? no / is the animal the animal sitting on? yes / is the object the animal on the brown animal? no / is it the black jacket with the brown handle? no / is the animal's head visible? no / is the object brown? no / is the handle black? no
Example 3:
GDSE-CL+imagination (status: success): is it a vase? yes / is it the one on the left? yes / is the whole thing? yes / is it on the left? yes / is the tallest bottle? no / is it between the two bottles and the pot? yes / is it between the two bottles and the pot in the middle? no / is it next to the blue cup? yes
GDSE-CL (status: failed): is it a cup? no / is it a bottle? no / is it a knife? no / is the object is in the background? no / is the handle holding the item? no / is the the glass thing near the corner of the photo? no / is the the back part of it visible? no / is it the furthest back? no / is it the furthest right? no / is it the left most corner? no
On the other hand, GDSE-CL+IMAGINATION is able to a) categorize the object as a member of the super-category “person”, and b) correctly ground the expression “kid on the bike” to the target object. The same behavior can be observed when the “antelope” is the target object. Antelopes are not part of the MSCOCO classes, and therefore have not been seen by the model during training. First, the model refers to it as “animal”, hence the Oracle is able to correctly answer the question even though “antelope” was never involved in the training. Secondly, we found that the number of No answers for GDSE-CL is considerably higher (88.06%) than for GDSE-CL+IMAGINATION (51.02%), validating our hypothesis that the Oracle does not know how to deal with unseen instances. Finally, in the imagination dialogue of the first example, even though the generated questions/answers were probably referring to the correct object, the Guesser model is eventually unable to guess correctly. More work is required to better fuse the language modality and the object representations to improve its performance.
5 Related Work
Concerning unsupervised learning of concept representations, Bruni et al. (2014) first learn modality-specific representations and then fuse them into a unified representation for each concept. However, they rely on hand-crafted bags of visual features, making the approach laborious to extend to new domains and games. Kiela et al. (2018) cope with this issue by relying on CNN models to extract latent features from images for instances of specific objects. Lazaridou et al. (2015) use a margin loss, but in the context of maximizing the similarity between the visual representation of a noun phrase and its corresponding text representation. Similarly, Collell et al. (2017) learn a mapping between the ResNet features and the word embeddings of a concept.
As discussed in Section 2, unlike our imagination embeddings, these purely perceptual representations are neither category-aware nor context-aware. Silberer et al. (2016) present a multi-modal model that uses a denoising auto-encoder framework. Unlike us, they do not use perceptual information as input but rely on an attribute-based representation derived from an additional attribute predictor. However, they do use a reconstruction loss (a cross-entropy loss for attribute prediction) and an auxiliary category loss during training. Their training scheme is more complex, as they first train the AE separately for each modality and then fuse them, which we avoid by adopting a single end-to-end architecture. Ebert and Pavlick (2019) used VAEs to learn grounded representations for lexical concepts. However, as discussed in Section 3, VAEs are not as well suited as RAEs to representation learning for our imagination module. In the context of guessing games, all the previous approaches rely on category embeddings (De Vries et al., 2017; Shekhar et al., 2019; Strub et al., 2017; Zhuang et al., 2018; Shukla et al., 2019) (see Section 2). Our imagination component can be flexibly integrated in any of them by replacing the category embeddings with imagination embeddings.
6 Conclusions
We argued that existing models for learning grounded conceptual representations fail to learn compositional and generalizable multi-modal representations, relying instead on the use of category labels for every object in the scene both at training and inference time (De Vries et al., 2017). To address this, we introduced a novel “imagination” module based on Regularized Auto-Encoders that learns a context-aware and category-aware latent embedding for every object directly from its image crop, without using category labels. We showed state-of-the-art performance in the CompGuessWhat?!
zero-shot scenario (Suglia et al., 2020), outperforming current models by 8.26% in gameplay accuracy while performing comparably on the other tasks to models which use category labels at training time. The imagination-based model also shows improvements of 2.08% and 12.86% in Oracle and Guesser accuracy. Finally, we conducted an extensive error analysis and showed that imagination embeddings help to reason about object visual properties and attributes. For future work, we plan to 1) integrate category labels at training time in a more principled way, following advances in semi-supervised learning (Kingma et al., 2014); and 2) improve the multi-task learning procedure presented by Shekhar et al. (2019) to optimize multiple tasks of different complexities at the same time."
+ },
+ {
+ "url": "http://arxiv.org/abs/2006.02174v1",
+ "title": "CompGuessWhat?!: A Multi-task Evaluation Framework for Grounded Language Learning",
+ "abstract": "Approaches to Grounded Language Learning typically focus on a single\ntask-based final performance measure that may not depend on desirable\nproperties of the learned hidden representations, such as their ability to\npredict salient attributes or to generalise to unseen situations. To remedy\nthis, we present GROLLA, an evaluation framework for Grounded Language Learning\nwith Attributes with three sub-tasks: 1) Goal-oriented evaluation; 2) Object\nattribute prediction evaluation; and 3) Zero-shot evaluation. We also propose a\nnew dataset CompGuessWhat?! as an instance of this framework for evaluating the\nquality of learned neural representations, in particular concerning attribute\ngrounding. To this end, we extend the original GuessWhat?! dataset by including\na semantic layer on top of the perceptual one. Specifically, we enrich the\nVisualGenome scene graphs associated with the GuessWhat?! images with abstract\nand situated attributes. By using diagnostic classifiers, we show that current\nmodels learn representations that are not expressive enough to encode object\nattributes (average F1 of 44.27). In addition, they do not learn strategies nor\nrepresentations that are robust enough to perform well when novel scenes or\nobjects are involved in gameplay (zero-shot best accuracy 50.06%).",
+ "authors": "Alessandro Suglia, Ioannis Konstas, Andrea Vanzo, Emanuele Bastianelli, Desmond Elliott, Stella Frank, Oliver Lemon",
+ "published": "2020-06-03",
+ "updated": "2020-06-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "main_content": "Introduction Several grounded language learning tasks have been proposed to capture perceptual aspects of language (Shekhar et al., 2017; Hudson and Manning, 2019; Suhr et al., 2019; Agrawal et al., 2018). However, the advances in this field have been primarily driven by the final performance measures and less by the grounding capability of the models. In fact, in some cases, high-performance models exploit dataset biases to achieve high scores on the final task (Zhang et al., 2016; Agrawal et al., 2016).
Turn  Question               Answer
1     is it an appliance?    yes
2     does it blend?         no
3     is it the oven?        no
4     is it the microwave?   yes
Figure 1: Every gameplay in the CompGuessWhat?! benchmark has a reference scene that is mapped to a scene graph composed of objects represented in terms of abstract and situated attributes. (Scene graph: microwave, oven, and food mixer linked by left-of relations; each object carries abstract attributes such as “is appliance”, “has buttons”, “is white”, and situated attributes such as “used to blend”. Legend: abstract / situated.)
In the literature, several methods have been proposed to analyse what kind of information is captured by neural network representations (Kádár et al., 2017; Belinkov and Glass, 2019). Most of these works examine the hidden state representations learned by models trained on only textual data. However, many aspects of human semantic representations are grounded in perceptual experience (Andrews et al., 2009; Riordan and Jones, 2011). This paper explores the idea that visually grounded representations ought to be the result of systematic composition of grounded representations (Harnad, 1990). For instance, the understanding of the word “microwave” is grounded in the perception of objects with specific attributes such as shape, colour, and size – see Figure 1 for an example.
Therefore, investigating whether the representations learned by a model exhibit forms of attribute composition is beneficial for assessing model interpretability and generalisation. In this work, we propose GROLLA – a multi-task evaluation framework for Grounded Language Learning with Attributes that expands a goal-oriented evaluation – based on the standard final task measure – with two auxiliary tasks: 1) Object attribute prediction (AP), and 2) Zero-shot evaluation (ZS).
Figure 2: CompGuessWhat?!: Detailed description of the attributes of two different objects in the reference scene. Both objects have a set of abstract attributes (indicated in blue) and a set of situated attributes (indicated in green). (Umbrella – type: accessories; has shaft, has handle, open, black, red, center. Little girl – Person; has eyes, has 2 legs, has 2 arms, has mouth.)
The attribute prediction task is designed to evaluate the extent to which the model’s latent representations associated with objects are useful for predicting their attributes. The prediction performance on this task can be related to a degree of compositionality of the learned representations. We adopt a behavioural, i.e., task-driven, approach to assessing aspects of compositionality for visually grounded representations, whereby the extent to which a representation is compositional depends on: (a) its ability to predict object attributes, and (b) its ability to generalise to novel combinations of object attributes. To support (b), we design a zero-shot evaluation that measures the extent to which the learned representations can be reused in a task involving objects unseen during training. By optimising for both the final end-goal measure as well as the auxiliary tasks, we aim to drive the design of models that can solve the task more reliably and whose representations are easier to interpret as a result of being a composition of visual attributes.
This paper presents three main contributions: (1) We define GROLLA – a multi-task evaluation framework for grounded language learning that augments the final end-goal measure(s) with auxiliary tasks aimed at assessing the degree of attribute grounding of the model’s representations; (2) We propose an instance of this multi-task evaluation framework, namely CompGuessWhat?!; and (3) We evaluate state-of-the-art models using the CompGuessWhat?! dataset. The evaluation shows that models with high performance on the end-goal task are not able to reliably predict the attributes of given objects and do not generalise to examples with unseen object categories. CompGuessWhat?! is a benchmark of 65,700 dialogues (see Section 3). It is based on GuessWhat?! (de Vries et al., 2017) dialogues and enhanced by including object attributes coming from resources such as the VISA attributes (Silberer and Lapata, 2012), VisualGenome (Krishna et al., 2017), and ImSitu (Yatskar et al., 2016).
2 Evaluation Framework
Our evaluation framework for Grounded Language Learning tasks is based on three different sub-tasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; 3) Zero-shot evaluation.
Goal-oriented evaluation. We evaluate the models according to the multi-modal task that they have to solve, which can generally be categorised as classification or generation. Classification tasks such as Visual Question Answering (Antol et al., 2015) or Visual Natural Language Inference (Suhr et al., 2019) involve predicting the correct label for a given example, with performance measured in terms of predictive accuracy. In generative tasks, such as Image Captioning (Bernardi et al., 2016), the model has to learn to generate a sequence of labels for a given input, with performance measured by BLEU (Papineni et al., 2002).
Object attribute prediction evaluation. We support the goal-oriented evaluation with an attribute prediction auxiliary task related to assessing the degree of compositionality of the representations learned for a specific task. With an attribute prediction task, we can assess whether the learned representations capture what we think they should, in terms of object attributes, rather than spurious correlations. The idea of using object attributes as an auxiliary task follows from the Characteristic Feature Hypothesis (Hampton, 1979), according to which every concept category has a set of defining features, which provide a criterion for judging which objects are category members and which are not. Therefore, the higher the accuracy in the attribute prediction task, the more the representations learned by the model are composed of the set of attributes of the objects.
Zero-shot evaluation. Via the attribute prediction task, we can assess the ability of latent representations to recover some of the attributes associated with their object category. Assuming that the model has learned to represent these attributes, we hypothesise that it should solve the original task even when objects that have never been seen during training are involved. In our evaluation framework, inspired by other multi-task evaluation frameworks (Wang et al., 2018; McCann et al., 2018; Wang et al., 2019; Shuster et al., 2019), we define Grounded Language Learning with Attributes (GROLLA) as the final score assigned to the model. It is computed as the macro-average of the metrics over all tasks. We define the GROLLA score for convenience only, and we underline the importance of having multiple scores for assessing different model abilities. In this work, we present CompGuessWhat?! as a dataset implementing this evaluation framework.
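Since the aggregate is a plain macro-average, it can be sketched in a couple of lines; the task names and metric groupings below are illustrative assumptions, as the exact set of sub-scores entering the average is defined by the benchmark.

```python
def macro_average(values):
    """Unweighted mean of a list of metric values."""
    return sum(values) / len(values)

def grolla_score(task_metrics):
    """Macro-average over tasks, where each task may itself report
    several metrics that are first macro-averaged within the task."""
    per_task = [macro_average(metrics) for metrics in task_metrics.values()]
    return macro_average(per_task)

# Hypothetical sub-scores for one model (not taken from the paper's tables):
grolla_score({
    "gameplay": [50.0],
    "attribute_prediction": [60.0, 40.0],  # e.g., two F1 variants
    "zero_shot": [30.0],
})  # (50 + 50 + 30) / 3
```

The unweighted mean means every task counts equally, regardless of how many metrics it reports internally.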
Thanks to the high overlap between the image sets of several datasets (Lu et al., 2019), future work will extend it to other grounded language learning tasks such as image captioning and visual navigation.

3 CompGuessWhat?! Benchmark

3.1 Task Definition

CompGuessWhat?! is an instance of our evaluation framework based on a guessing game (Steels, 2015), which can be viewed as a first step in a curriculum of language games for artificial agents. It involves two agents, a scene, and a target object: the Questioner asks questions in order to identify the target object in the scene, while the Oracle knows the target object and has to answer the questions. A multi-turn guessing game requires two essential properties for grounded language learning: 1) the ability to generate discriminative questions aimed at narrowing down the search space (Natural Language Generation), and 2) the ability to understand the information provided so far during the game and exploit it to guess the target object (Natural Language Understanding).

3.2 Image Annotations Design

CompGuessWhat?! extends the GuessWhat?! dataset (de Vries et al., 2017) to promote the study of attribute-grounded language representations. The original GuessWhat?! dataset is extended with a semantic layer on top of the perceptual layer (i.e., images). This layer consists of a collection of intensional and extensional attributes of the objects in the reference image (Figure 2). We enrich the VisualGenome (Krishna et al., 2017) scene graphs associated with the GuessWhat?! images with several attributes coming from resources such as VISA (Silberer and Lapata, 2012) and ImSitu (Yatskar et al., 2016). Unfortunately, not all the GuessWhat?! images are included in VisualGenome. We were able to reuse 40.79% of the original GuessWhat?! dialogues, for a total of 65,700 dialogues (additional information can be found in Appendix A.1). 
By relying on this set of attributes, we define an attribute prediction evaluation to assess the extent to which the learned neural representations encode the attributes specified during the dialogue. In order to determine the generalisation power of the learned representations and their ability to be transferred, we propose a novel zero-shot set of reference games involving target objects belonging to unseen object categories. The dataset and the code associated with this paper can be found online at https://compguesswhat.github.io.

Psycholinguistically-motivated attributes We extend the set of attributes for every object category in MSCOCO with psycholinguistically-motivated semantic representations based on the McRae norms (McRae et al., 2005) developed by Silberer and Lapata (2012). We use only the subset of so-called abstract attributes, and ignore attributes from the original set that can change depending on the reference image (e.g., "shape", "texture", etc.). We use the WordNet synset identifier (e.g., "person.n.01") associated with a given MSCOCO category (e.g., "person") to automatically associate its corresponding abstract attributes with a specific object instance. However, very often several VisualGenome objects have a synset associated with a class that is a hyponym of the MSCOCO category synset. Therefore, we rely on the Wu-Palmer similarity (Wu and Palmer, 1994) to find the best match between the VisualGenome synset and the MSCOCO category synset (with a similarity threshold of 0.75, chosen using the distance between the synsets of person and woman as reference). The intuition behind this heuristic is that a hyponym will inherit the abstract attributes of its hypernym. 
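The synset-matching step above can be sketched without WordNet: Wu-Palmer similarity only needs node depths and the lowest common subsumer, wup(a, b) = 2 * depth(lcs) / (depth(a) + depth(b)). The toy is-a taxonomy below is a stand-in invented for illustration; the paper uses real WordNet synsets.

```python
# Toy is-a taxonomy standing in for WordNet (invented for illustration).
PARENT = {
    "entity": None,
    "person.n.01": "entity",
    "woman.n.01": "person.n.01",
    "vehicle.n.01": "entity",
    "car.n.01": "vehicle.n.01",
}

def ancestors(node):
    """Path from node up to the root, inclusive, deepest first."""
    path = []
    while node is not None:
        path.append(node)
        node = PARENT[node]
    return path

def depth(node):
    return len(ancestors(node))

def wu_palmer(a, b):
    """2 * depth(LCS) / (depth(a) + depth(b)), as in Wu and Palmer (1994)."""
    anc_a = ancestors(a)
    anc_b = set(ancestors(b))
    lcs = next(n for n in anc_a if n in anc_b)  # first hit = deepest shared ancestor
    return 2 * depth(lcs) / (depth(a) + depth(b))

THRESHOLD = 0.75  # the paper's cut-off, calibrated on person vs. woman
print(wu_palmer("woman.n.01", "person.n.01") >= THRESHOLD)  # hyponym: accepted
print(wu_palmer("car.n.01", "person.n.01") >= THRESHOLD)    # unrelated: rejected
```

Note that the absolute similarity values depend on taxonomy depth, so the 0.75 threshold is only meaningful relative to the real WordNet hierarchy.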
Affordances & Behaviours We extract the semantic roles associated with specific object categories using the ImSitu dataset (Yatskar et al., 2016), in order to include affordances and behaviours associated with every object category. An object category is associated with a behaviour every time it appears as the agent of a given predicate. For instance, in "the food mixer [agent] blends fruit", the behaviour is the food mixer's ability to blend something. We also consider affordances associated with a given category and divide them into two groups: 1) can be: every predicate having the object category in the item, coagent, or vehicle semantic role; 2) used to: every predicate having the object category in the tool, heatsource, or object role. For example, in the statement "the person opens the oven [item]", an affordance is the fact that an oven can be opened. These attributes extend the set of abstract attributes. The abstract attributes do not depend on the reference image, so they can be reused in other contexts as well.

Situated attributes Since the images contained in GuessWhat?! come from the MSCOCO dataset (see Figure 1 for an example), some of them are included in the VisualGenome (Krishna et al., 2017) dataset, which provides rich scene graphs for every image. In particular, we verified that 27,155 images from the GuessWhat?! dataset are also contained in VisualGenome. However, the VisualGenome images are not always identical to the MSCOCO ones. We use a heuristic approach based on both Intersection over Union (IoU) and language-only features to match the object bounding boxes between the two versions of an image. We report more details about the algorithm in Appendix A.2. The set of object attributes from VisualGenome (attribute types, colour, size, etc.) 
and location/positional attributes (one of top/bottom/left/right/centre, based on bounding box location) make up the situated attributes, which are specific to the reference image. As a final step, due to the image mismatch, we include the original GuessWhat?! object annotations in the VisualGenome graph whenever a GuessWhat?! object cannot be mapped to a VisualGenome one. By doing this, we retain access to the MSCOCO category of the object, from which we can recover all its abstract attributes.

4 CompGuessWhat?! Evaluation

4.1 Guesser Accuracy Evaluation

We consider the guesser accuracy metric from the GuessWhat?! dataset for our goal-oriented evaluation, computed in gameplay mode (a gameplay involves three trained models that generate dialogues given a pair of image and target object). It measures how many times the guesser model can select the correct target object among the candidate objects, given the dialogue generated so far. Due to the importance of this language game for NLU and NLG skills, we keep guesser accuracy as a reference metric to assess the ability of the questioner to play the game. However, unlike the original dataset evaluation, we make sure that the score ignores duplicated dialogues.

4.2 Attribute Prediction Evaluation

In a sequential guessing game like the one in Figure 1, we regard the representation for the last turn of the dialogue as a composition, or aggregation, of all the attributes specified so far. Therefore, it should allow predicting with high accuracy the attributes associated with the target object, because it should encode the information needed to correctly discriminate the target from all the other objects in the scene. In the dialogue of Figure 1, when the model generates a representation for the last turn of the conversation (i.e., "Q: Is it the microwave? 
A: Yes"), it should encode the fact that "it is an appliance", "it is not the oven", and "it is the microwave", allowing the agent to guess the target object correctly. By playing several guessing games that have a microwave as the target object, the agent should learn a representation of microwave that is expressive enough to correctly discriminate a microwave from all the other objects in a scene. In this setup, we are not assuming that the model has a single representation for the concept of microwave; rather, the concept of microwave develops from aggregating multimodal information related to microwaves across the situations in which the object is experienced (Barsalou, 2017). In the context of CompGuessWhat?!, every successful dialogue involving a microwave as the target object is considered such an experience. We are interested in understanding whether the dialogue state representation generated by a neural model for the last turn of the dialogue encodes the attributes of the target object specified during the dialogue. To do so, we define four attribute prediction tasks. For every target object we predict the corresponding vector composed of: 1) abstract attributes only (A); 2) situated attributes only (S); 3) the union of abstract and situated attributes (AS); and 4) location attributes (L) such as centre, top, bottom, right, and left. (Note that in the test set multiple conversations are associated with the same image and target object pair; for the accuracy evaluation, each pair is considered only once.) After training the model on the original GuessWhat?! dataset, we generate dialogue representations corresponding to all the CompGuessWhat?! successful games. Then, we train a diagnostic classifier that predicts the attributes associated with a given object category using the dialogue hidden representation generated for a given game as features. 
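The four prediction targets can be represented as multi-hot vectors over fixed attribute inventories. A sketch of how the A, S, AS, and L target vectors might be assembled; all attribute names here are invented for illustration (the real inventories come from VISA, VisualGenome, and ImSitu annotations):

```python
def multi_hot(attrs, inventory):
    """Map a set of attribute strings to a 0/1 vector over a fixed inventory."""
    return [1 if a in attrs else 0 for a in inventory]

# Hypothetical attribute inventories, for illustration only.
ABSTRACT = ["is_appliance", "has_door", "can_heat"]
SITUATED = ["colour_white", "size_small"]
LOCATION = ["top", "bottom", "left", "right", "centre"]

# Attributes of one (hypothetical) target object, e.g. a microwave.
target = {"abstract": {"is_appliance", "can_heat"},
          "situated": {"colour_white"},
          "location": {"left"}}

a_vec = multi_hot(target["abstract"], ABSTRACT)   # task A
s_vec = multi_hot(target["situated"], SITUATED)   # task S
as_vec = a_vec + s_vec                            # task AS: disjoint inventories, so
                                                  # concatenation = union vector
l_vec = multi_hot(target["location"], LOCATION)   # task L
print(a_vec, s_vec, as_vec, l_vec)
```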
We hypothesise that a model that has learned grounded representations expressive enough to correctly guess the target object should retain the relevant features to predict its attributes. We treat the attribute prediction problem as a multi-label classification task. We implement our diagnostic classifier Φ as a linear transformation parameterised by a weight matrix in R^{d_d x d_a} (where d_d is the dialogue hidden state size and d_a is the number of attributes to be predicted), followed by a sigmoid activation function. We use a sigmoid activation function because it models a Bernoulli distribution: the diagnostic classifier outputs d_a logits, each modelling the probability P(y_k = 1 | d) (where d is the dialogue state representation), one for each attribute y_k to be predicted. To mitigate a possible class-imbalance problem, we apply a filtering strategy that removes under-represented attributes from our attribute set, a technique similar to the handling of out-of-vocabulary words. We also avoid class-weighting so that we can evaluate the power of the learned representations with simple linear classifiers, as done in previous work using probing classifiers (Belinkov and Glass, 2019). Please refer to Appendix A.3 for details about the procedure used to derive the reference set of attributes. We use the CompGuessWhat?! dataset split as the reference for our training and evaluation setup: we train the diagnostic classifiers on the CompGuessWhat?! gold training dialogues and evaluate their performance on the test dialogues, using the validation dialogues for early stopping. We report Precision, Recall, and F1 for multi-label classification (Sorower, 2010), computed as macro-averages, with 0.5 as the threshold on the sigmoid outputs (selected after also considering threshold values of 0.75 and 0.9). We report additional details in Appendix A.3. 
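The diagnostic classifier described above is a single linear map followed by an element-wise sigmoid, thresholded at 0.5. A minimal NumPy sketch with random, untrained weights (in the paper the probe is trained on the gold dialogues; the dimensions here are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_attributes(d, W, b, threshold=0.5):
    """Diagnostic probe: d is the dialogue state (d_d,), W is (d_d, d_a).

    Each output models an independent Bernoulli P(y_k = 1 | d); attributes
    whose probability clears the threshold are predicted as present.
    """
    probs = sigmoid(d @ W + b)
    return (probs >= threshold).astype(int), probs

rng = np.random.default_rng(0)
d_d, d_a = 8, 5                   # hidden size and number of attributes (illustrative)
d = rng.normal(size=d_d)          # a stand-in dialogue state representation
W = rng.normal(size=(d_d, d_a))   # untrained weights, for shape-checking only
b = np.zeros(d_a)
labels, probs = predict_attributes(d, W, b)
print(labels)
```

Training would fit W and b with a per-attribute binary cross-entropy loss, which matches the independent-Bernoulli reading of the sigmoid outputs.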
4.3 Zero-shot Evaluation

Assuming that the model has learned to compose concepts over the turns of the dialogue, we hypothesise that it should also be able to use these representations to play games involving target objects whose categories have never been seen before. For example, humans can discriminate between a dolphin and a dog even if they do not know what one of them is called. The measure presented in this section can demonstrate whether current models lack the ability to systematically generalise to new instances composed of attributes learned during training. In order to assess the true generalisation power of the trained agents, we define a zero-shot learning scenario based on the nocaps dataset images (Agrawal et al., 2018). The nocaps dataset is composed of three evaluation splits: 1) in-domain: annotated objects belong to MSCOCO categories only; 2) near-domain: contains a mixture of MSCOCO and OpenImages objects; 3) out-of-domain: contains only OpenImages object categories. Since the number of categories in the original GuessWhat?! dataset (80) is lower than the number of categories in the OpenImages dataset (660) contained in nocaps, many categories are never seen during training. Therefore, we can create zero-shot games by choosing a target object whose category has never been seen during training. We define an automatic procedure to generate the set of reference games for the zero-shot setup using the nocaps images. We split the nocaps images into near-domain and out-of-domain: an image is considered near-domain if it contains at least one object whose category belongs to MSCOCO, and out-of-domain if it does not contain any MSCOCO-category objects. This procedure generates a dataset of 19,179 near-domain reference games and 18,672 out-of-domain reference games. 
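The near-domain/out-of-domain split reduces to a membership test over each image's object categories. A sketch, with a tiny invented stand-in for the 80 MSCOCO categories:

```python
# Stand-in for the 80 MSCOCO categories (illustrative subset).
MSCOCO = {"person", "car", "dog", "microwave"}

def domain_of(image_categories):
    """Paper's split rule: near-domain if the image contains at least one
    MSCOCO-category object, out-of-domain otherwise."""
    return "near-domain" if MSCOCO & set(image_categories) else "out-of-domain"

print(domain_of({"person", "crocodile"}))   # near-domain
print(domain_of({"crocodile", "dolphin"}))  # out-of-domain
```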
More details about the automatic procedure, as well as the resulting reference set of games, can be found in Appendix A.4. As a last step of our evaluation framework, we evaluate the performance of state-of-the-art models in the zero-shot gameplay setup. For this task, the trained models need to interact with each other and generate dialogues given the pair (image, target object). As evaluation metrics for this task, we consider gameplay guesser accuracy for the near-domain (ND-Acc) and out-of-domain (OD-Acc) reference games.

| Model | Gameplay Accuracy | A-F1 | S-F1 | AS-F1 | L-F1 | ND-Acc | OD-Acc | GROLLA |
|---|---|---|---|---|---|---|---|---|
| Random | 15.81% | 15.1 | 0.1 | 7.8 | 2.8 | 16.9% | 18.6% | 13.3 |
| GloVe | - | 34.6 | 29.7 | 36.4 | 33.6 | - | - | - |
| ResNet | - | 24.5 | 31.7 | 27.9 | 43.4 | - | - | - |
| GDSE-SL-text | - | 57.0 | 45.3 | 57.5 | 46 | - | - | - |
| GDSE-CL-text | - | 56.9 | 45.0 | 57.3 | 45 | - | - | - |
| DeVries-SL | 41.5% | 46.8 | 39.1 | 48.5 | 42.7 | 31.3% | 28.4% | 38.5 |
| DeVries-RL | 53.5% | 45.2 | 38.9 | 47.2 | 42.5 | 43.9% | 38.7% | 46.2 |
| GDSE-SL | 49.1% | 59.9 | 47.6 | 60.1 | 48.3 | 29.8% | 22.3% | 43.0 |
| GDSE-CL | 59.8% | 59.5 | 47.6 | 59.8 | 48.1 | 43.4% | 29.8% | 50.1 |

Table 1: Results for state-of-the-art models on the CompGuessWhat?! suite of evaluation tasks. We assess model quality in terms of gameplay accuracy; attribute prediction quality, measured as F1 for the abstract (A-F1), situated (S-F1), abstract+situated (AS-F1), and location (L-F1) prediction scenarios; and zero-shot gameplay. The final score, GROLLA, is a macro-average of the individual scores. The models GloVe, ResNet, and GDSE-*-text serve only as baselines for the attribute prediction tasks.

5 Results: Model Evaluation using CompGuessWhat?!

Guesser accuracy We evaluate the GDSE and DeVries models in gameplay mode using the set of reference games provided in CompGuessWhat?!. As shown in Table 1, the results are in line with the performance of the models on the original GuessWhat?! dataset (de Vries et al., 2017; Shekhar et al., 2019), confirming that our filtering strategy did not affect the complexity of the task. 
Attribute Prediction We use the CompGuessWhat?! benchmark to compare several dialogue state representations:

DeVries-SL: the representation learned by the Questioner model presented in de Vries et al. (2017), which generates the question tokens conditioned on the image features and is trained using Supervised Learning (SL).
DeVries-RL: the representations learned by the same Questioner model, fine-tuned using the Reinforcement Learning procedure proposed in Strub et al. (2017).
GDSE-SL: the grounded dialogue state learned by a seq2seq model trained using the multi-task learning procedure of Shekhar et al. (2019).
GDSE-CL: the grounded dialogue state learned by the GDSE-SL Questioner model, fine-tuned with the Collaborative Learning procedure presented in Shekhar et al. (2019).
GDSE-SL-text: the learned LSTM (Hochreiter and Schmidhuber, 1997) dialogue encoder of the GDSE-SL model.
GDSE-CL-text: the learned dialogue encoder of the GDSE-CL model.

In order to control for possible biases in our task, we also consider unimodal (Thomason et al., 2019a) as well as random attribute predictors:

GloVe: a dialogue is represented as the average of the GloVe embeddings of its words (Pennington et al., 2014).
ResNet: uses the latent representation of the reference scene generated by a ResNet-152, as proposed in Shekhar et al. (2019).
Random: samples d_a scores independently from U(0, 1). We include this baseline as a lower bound on attribute prediction performance.

With the attribute prediction (AP) task, we try to answer the following question: "Do the representations associated with the target object provide useful information that can be exploited to predict the object attributes correctly?" We assume that, due to the nature of the CompGuessWhat?! games, the final dialogue state representation should encode relevant features of the target object. 
So, a high gameplay accuracy should correlate with a high AP score. Table 1 summarises the results of the attribute prediction task evaluated on the CompGuessWhat?! test games. (We could use the dialogue encoders of the GDSE models only, thanks to their modular architecture; it was not possible to properly separate the dialogue encoder from the visual representation in the DeVries models.) As the best model's average performance was only 44.27, far from ceiling, our hypothesis is only partially supported. In particular, the models with the highest guesser accuracy, GDSE-CL and GDSE-SL, learn better representations than the unimodal baselines GloVe and ResNet, confirming the importance of multi-modal training for this task. There is also a gap in performance between the GDSE and DeVries models. This might be related to the multi-task learning strategy used by the GDSE models, which favours the emergence of more expressive representations than those learned by the DeVries models, which are trained in isolation. Comparing the enhanced versions, GDSE-CL and DeVries-RL, with the simpler ones, GDSE-SL and DeVries-SL, respectively, we observe that, despite their higher guesser accuracy, these models have no advantage on the AP task. We believe this is because the Collaborative training strategy (for GDSE-CL) and Reinforcement Learning (for DeVries-RL) optimise end-goal performance while sacrificing the expressiveness of the representations. Finding a way to encode task-specific representations while generalising them into abstract representations thus becomes an important research direction for this task. As an additional ablation, we compared the representations learned by the LSTM module used by GDSE to encode the dialogue (GDSE-*-text) with their grounded dialogue state counterparts. 
Differences in terms of F1 are minimal, confirming that the heavy lifting is done by the textual representations; it is not clear how well the grounded dialogue state retains the visual information. Further confirmation of this issue is provided by the results on location attribute prediction: performance on this task is around 40 for all models, meaning that both VGGNet and ResNet features (used by DeVries and GDSE, respectively) cannot recover fine-grained object information. This result sheds light on the ability of these models to ground the textual data in perceptual information from the reference scene. We believe that models should be able to co-ground one modality with the other and, as a result, learn more expressive grounded representations.

Zero-shot Evaluation Results are summarised in Table 1; the most striking observation is that all models struggle with this dataset (guesser accuracy is barely above 40), although arguably humans would be able to solve the task despite their unfamiliarity with a specific object. Indeed, in this zero-shot scenario, reusing previously learned attributes that are shared among objects, or leveraging mutual exclusivity (Markman and Wachtel, 1988), would result in successful gameplay. Even the most accurate model in the CompGuessWhat?! guesser evaluation performs poorly in this zero-shot setup (see Figure 3 for an example). We attribute this drop in performance to the way these models represent objects. In particular, they all rely on category embeddings, i.e., latent representations associated with specific object categories (refer to Shekhar et al. (2019) and de Vries et al. (2017) for more details). In the zero-shot evaluation, when an object is unknown, its category embedding is also unavailable. This is true for both the DeVries and GDSE models; GDSE models seem to suffer more than DeVries models, possibly due to overfitting. 
On the other hand, we aim to learn object representations which are not associated with manually provided categories but are obtained by playing the game, and which encode both abstract and situated attributes. Once again, we find that models optimised using Reinforcement Learning seem to learn a better game strategy, resulting in higher performance on both near-domain and out-of-domain games. To better understand the quality of the generated dialogues, we classify each question according to a pre-defined set of types based on Shekhar et al. (2019) (please refer to Appendix A.5 for a detailed description and a summary of the evaluation results). We noticed that the DeVries models generate dialogues in which 70% of the turns are "location" questions (e.g., "is it the person on the right?"), compared to 20% for the GDSE models. We argue that to tackle zero-shot scenes, a model should instead learn features useful to discriminate the target object without relying on locations. Of course, in some reference scenes, location questions are still useful attributes used by humans when playing the game. In addition, asking location questions is an effective strategy because the Oracle has access to positional information that can be used to provide reliable answers, but does not have any category embedding for the target object.

Figure 3: Example gameplay in the out-of-domain scenario for the two best performing systems, GDSE-CL and DeVries-RL. The models have to play the game with the crocodile as target object. This is a zero-shot scenario because the object category crocodile is not among the MSCOCO categories.

Target object: crocodile

GDSE-CL failure:
Q: is it a bird? A: no
Q: is it a vehicle? A: no
Q: is it a person? A: no
Q: is it wood? A: no
Q: is it a light? A: no
Q: is it near the person? A: yes
Q: is it to the right of the person? A: no
Q: is person wearing it? A: no
Q: is the person holding it? A: yes

DeVries-RL failure:
Q: is it an animal? A: no
Q: is it a picture? A: no
Q: is it a toilet? A: no
Q: is it tie? A: no
Q: is it a person? A: no
Q: is it a person? A: yes
Q: is it in the left? A: no
Q: is it in the middle? A: yes
Q: is it in the top? A: yes

6 Related Work

Text-only Multi-task Evaluation In recent years, progress in NLP has been driven by multi-task evaluation datasets proposed to mitigate the biases of task-specific datasets (Wang et al., 2018; McCann et al., 2018; Wang et al., 2019). Despite their multi-task nature, these datasets focus on text-only data, making the resulting models unable to learn meaning representations grounded in perceptual experience (Andrews et al., 2009; Riordan and Jones, 2011). Another downside is that these benchmarks focus only on end-goal metrics, i.e., they are not informative about what the model has learned. Going beyond the end-goal metric is fundamental for designing models that are more generalisable and interpretable. By introducing the attribute prediction task in our framework, we assess whether the learned representations are expressive enough to predict the attributes of relevant objects in the scene. We also propose a zero-shot evaluation where the model has to generate predictions for examples never seen during training, thus providing a way to understand the generalisation power of the learned representations.

Grounded Language Learning Evaluation Several grounded language learning tasks have been proposed in the literature; they can be divided into discriminative (Shekhar et al., 2017; Hudson and Manning, 2019; Suhr et al., 2019) and generative tasks (Xu et al., 2015; Agrawal et al., 2018). Recent works propose models trained in a multi-task fashion by exploiting several language/vision tasks. The dodecaDialogue task (Shuster et al., 2019) proposes twelve dialogue tasks, among which are two language/vision tasks in which the agent has to generate a response for a given context. 
Other works exploit multi-task learning to improve on single-task model performance in discriminative tasks (Pramanik et al., 2019; Lu et al., 2019). Unfortunately, implementing multi-task learning across different datasets is cumbersome (Subramanian et al., 2018). We propose an evaluation framework that can be applied in the context of a single task and dataset (e.g., GuessWhat?!) and that allows us to understand the extent to which the model learns useful representations for the task at hand. Inspecting the learned representations is important because, due to biases in the datasets, models might learn spurious correlations between input and output rather than actual grounding capabilities. For instance, in Visual Question Answering, questions starting with "What colour are" have "white" as the correct answer 23% of the time; models learn to memorise this sort of association rather than using the visual information (Zhang et al., 2016; Agrawal et al., 2016). This issue calls for model evaluation aimed at inspecting the model's representations as well as how these representations are used. The GQA dataset (Hudson and Manning, 2019) goes in this direction: it presents a Visual Question Answering dataset where images are supported by rich semantic annotations in the form of scene graphs. The GQA task requires the model to select an answer among a set of candidates. However, we advocate the importance of tasks that involve both Natural Language Understanding (NLU) and Natural Language Generation (NLG) skills in a curriculum for grounded language learning. There are significant differences concerning the proposed auxiliary tasks as well. First of all, GQA's auxiliary tasks are specifically designed around the VQA task to make sure that the model is consistent and plausible; they do not, however, tell us what the model's learned representations are encoding. 
We propose the AP task as a diagnostic task aimed at better understanding the learned neural representations (Belinkov and Glass, 2017; Conneau et al., 2018; Peters et al., 2018; Tenney et al., 2019). In addition, going beyond simple object classification is considered beneficial for vision systems (Farhadi et al., 2009) because it allows generalisation across object categories, not just across instances within a category. However, we believe that to truly assess the generalisation ability of a model, object attributes have to be used in the downstream task, which is not necessarily the case in object classification tasks. With the zero-shot evaluation, we investigate the ability of models to exploit more fine-grained visual attributes, which is important for models that learn from few examples and transfer easily to new domains.

Compositionality Evaluation Andreas (2019) presents a method to estimate the degree of compositionality of neural representations by using an oracle compositional model aware of the compositional structure (i.e., a derivation) of the input data. Building a reference oracle is easy for synthetic scenes (as in Andreas (2019)) but is a significant challenge for real-world scenes. Previous work has studied compositionality in real-world scenes for visual concept composition (Misra et al., 2017) and image captioning (Nikolaus et al., 2019). In our benchmark, CompGuessWhat?!, we use real-world scenes from the MSCOCO (Lin et al., 2014) and OpenImages (Kuznetsova et al., 2018) datasets. Our AP task is related to measuring compositionality: it relies on image annotations in the form of intensional and extensional attributes as a reference structure for the objects in the scene.

7 Conclusions & Future Work

We proposed CompGuessWhat?! as an implementation of GROLLA, a multi-task evaluation framework for Grounded Language Learning with Attributes. 
We found that the best performing model achieves a GROLLA score of 50.06%; notably, this model's out-of-domain accuracy is under 30%, compared to human performance of 90.2% on the original GuessWhat?! dataset (de Vries et al., 2017). Clearly, even models with high in-domain gameplay success rates still have difficulty generalising to new scenarios. In the following, we discuss insights gained from the evaluation and new research directions for this task. The attribute prediction task shows that model representations are not able to accurately recover attribute representations. We argue that this result calls for new approaches to exploiting and representing textual and visual data. We believe that models should be equipped with a co-grounding operator that fuses the textual and visual modalities. For instance, in the context of CompGuessWhat?!, it would be used to learn a representation for the current turn that is influenced by both the language and the visual modality. CompGuessWhat?! requires models to learn to combine the co-grounded information provided at every turn. Therefore, we propose CompGuessWhat?! as a benchmark dataset for evaluating the design of such an attribute compositionality operator, which would be a possible implementation of compositionality à la Barsalou (2017). In this work, we have shown how our multi-task evaluation framework can be applied to GuessWhat?!. However, the same framework could be applied to other multi-modal tasks. For example, in image captioning, the goal-oriented evaluation would use textual similarity metrics (e.g., BLEU); the attribute prediction task would use the decoder representation to predict the attributes of the objects in the image (e.g., Elliott and Kádár, 2017); and the zero-shot setting could leverage the nocaps dataset (Agrawal et al., 2018). 
Likewise, in the Vision-and-Dialog navigation task (Thomason et al., 2019b), the goal-oriented evaluation is the navigation task; attribute prediction would be based on predicting the attributes of the hidden object when the agent decides it is in the correct room; and the zero-shot setting could evaluate model performance on novel combinations of rooms and object types. Finally, the evaluation presented here shows that these models learn task-specific representations that do not generalise to unseen object categories. We hope that GROLLA and the CompGuessWhat?! data will encourage the implementation of learning mechanisms that fuse task-specific representations with more abstract representations, encoding attributes in a more compositional manner. In addition, we will use the CompGuessWhat?! image annotations to design a visual grounding evaluation that assesses the ability of models to attend to the correct objects during the turns of the dialogue.

Acknowledgements We thank Arash Eshghi and Yonatan Bisk for fruitful discussions in the early stages of the project."
+ }
+ ],
+ "Arash Eshghi": [
+ {
+ "url": "http://arxiv.org/abs/2308.11683v1",
+ "title": "Learning to generate and corr- uh I mean repair language in real-time",
+ "abstract": "In conversation, speakers produce language incrementally, word by word, while\ncontinuously monitoring the appropriateness of their own contribution in the\ndynamically unfolding context of the conversation; and this often leads them to\nrepair their own utterance on the fly. This real-time language processing\ncapacity is furthermore crucial to the development of fluent and natural\nconversational AI. In this paper, we use a previously learned Dynamic Syntax\ngrammar and the CHILDES corpus to develop, train and evaluate a probabilistic\nmodel for incremental generation where input to the model is a purely semantic\ngeneration goal concept in Type Theory with Records (TTR). We show that the\nmodel's output exactly matches the gold candidate in 78% of cases with a\nROUGE-l score of 0.86. We further do a zero-shot evaluation of the ability of\nthe same model to generate self-repairs when the generation goal changes\nmid-utterance. Automatic evaluation shows that the model can generate\nself-repairs correctly in 85% of cases. A small human evaluation confirms the\nnaturalness and grammaticality of the generated self-repairs. Overall, these\nresults further highlight the generalisation power of grammar-based models and\nlay the foundations for more controllable, and naturally interactive\nconversational AI systems.",
+ "authors": "Arash Eshghi, Arash Ashrafzadeh",
+ "published": "2023-08-22",
+ "updated": "2023-08-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "main_content": "Introduction People process language incrementally, in real-time (see Crocker et al. (2000); Ferreira (1996); Kempson et al. (2016) among many others), i.e. both language understanding and generation proceed on a word by word rather than a sentence by sentence, or utterance by utterance basis. (All relevant code, models, and data are available at https://bitbucket.org/dylandialoguesystem/dsttr/src/dsttr_arash_a/) This real-time processing capacity underpins participant coordination in conversation (Gregoromichelaki et al., 2012, 2020) and leads to many characteristic phenomena such as split utterances (Poesio and Rieser, 2010; Purver et al., 2009), mid-utterance feedback in the form of backchannels (Heldner et al., 2013) or clarification requests (Healey et al., 2011; Howes and Eshghi, 2021), hesitations, self-repairs (Schegloff et al., 1977) and more. Language generation \u2013 our focus here \u2013 is just as incremental as language understanding: speakers normally do not have a fully formed conceptualisation or plan of what they want to say before they start articulating, and conceptualisation needs only to be one step ahead of generation or articulation (Guhe, 2007; Levelt, 1989). This is possible because speakers are able to continuously monitor the syntax, semantics, and the pragmatic appropriateness of their own contribution (Levelt, 1989) in the fast, dynamically evolving context of the conversation. In turn this allows them to pivot or correct themselves on the fly if needed, e.g. because they misarticulate a word, get feedback from their interlocutors (Goodwin, 1981), or else the generation goal changes due to the dynamics of the environment. 
Real-time language processing is likewise crucial in designing dialogue systems that are more responsive, more naturally interactive (Skantze and Hjalmarsson, 2010; Aist et al., 2006), and are more accessible to people with memory impairments (Addlesee et al., 2019; Addlesee and Damonte, 2023; Nasreen et al., 2021). Despite this importance, relative to turn-based systems, it has received little attention from the wider NLP community; perhaps because it has deep implications for the architecture of such systems (Schlangen and Skantze, 2009; Skantze and Schlangen, 2009; Kennington et al., 2014), which make them much harder to build and maintain. In this paper, we extend the work of Purver and Kempson (2004); Hough and Purver (2012); Hough (2015), who lay the theoretical foundations for incremental generation and later the processing of self-repairs in Dynamic Syntax (Kempson et al., 2001, 2016, Sec. 2.3). For the first time, we develop a probabilistic model for incremental generation (Sec. 3) that conditions next-word selection on the current incrementally unfolding context of the conversation, and also on features of a purely semantic generation goal concept, expressed as a Record Type (RT) in Type Theory with Records (Cooper, 2012; Cooper and Ginzburg, 2015). The model is trained and evaluated on part of the CHILDES corpus (MacWhinney, 2000) using an extant grammar that was learned by Eshghi et al. (2013) from the same data. Results show that in the best case, the model output matches the gold generation test candidate in 83% of cases (Sec. 4.2). We then go on to experiment with and evaluate the ability of the same model to generate self-repairs in a zero-shot setting in the face of revisions to the goal concept RT under various conditions (Sec. 4.3): viz. for forward-looking and backward-looking repair and at different distances from the reparandum. 
Automatic evaluation shows that it can generate self-repairs correctly in 85% of cases. A small human evaluation confirms the overall naturalness and grammaticality of the generated repairs. Overall, these results further highlight the generalisation power of grammar-based models (see also Mao et al. (2021); Eshghi et al. (2017)) and lay the foundations for more controllable, and naturally interactive conversational AI systems. 2 Dynamic Syntax and Type Theory with Records (DS-TTR) Dynamic Syntax (DS, Kempson et al., 2016; Cann et al., 2005; Kempson et al., 2001) is a process-oriented grammar formalism that captures the real-time, incremental nature of the dual processes of linguistic comprehension and production, on a word by word or token by token basis. It models the time-linear construction of semantic representations (i.e. interpretations) as progressively more linguistic input is parsed or generated. DS is idiosyncratic in that it does not recognise an independent level of structure over words: on this view, syntax is a set of constraints on the incremental processing of semantic information. The output of parsing any given string of words is thus a semantic tree representing its predicate-argument structure (see Fig. 1). DS trees are always binary branching, with argument nodes conventionally on the right and functor nodes to the left; tree nodes correspond to terms in the lambda calculus, decorated with labels expressing their semantic type (e.g. Ty(e)) and formulae \u2013 here as record types of Type Theory with Records (TTR, see Sec. 2.1 below); and beta-reduction determines the type and formula at a mother node from those at its daughters (Fig. 1). These trees can be partial, containing unsatisfied requirements potentially for any element (e.g. ?Ty(e), a requirement for future development to Ty(e)), and contain a pointer, \u2662, labelling the node currently under development. 
Grammaticality is defined as parsability in a context: the successful incremental word-by-word construction of a tree with no outstanding requirements (a complete tree) using all information given by the words in a string. We can also distinguish potential grammaticality (a successful sequence of steps up to a given point, although the tree is not complete and may have outstanding requirements) from ungrammaticality (no possible sequence of steps up to a given point). Fig. 1 shows \u201cJohn arrives\u201d, parsed incrementally, starting with the axiom tree with one node (?Ty(t)), and ending with a complete tree. [Figure 1: Incremental parsing in DS-TTR: \u201cJohn arrives\u201d] The intermediate steps show the effects of: (i) DS Computational Actions (e.g. Completion, which moves the pointer up and out of a complete node, or Anticipation, which moves the pointer down from the root to its functor daughter), which are language-general and apply without any lexical input whenever their preconditions are met; and (ii) Lexical Actions, which correspond to words and are triggered when a word is parsed. Context In DS, context, required for processing various forms of context-dependency \u2013 including pronouns, VP-ellipsis, and short answers, as well as self-repair \u2013 is the parse search Directed Acyclic Graph (DAG), and as such, is also process-oriented. Edges correspond to DS actions \u2013 both Computational and Lexical Actions \u2013 and nodes correspond to semantic trees after the application of each action (Sato, 2011; Eshghi et al., 2012; Kempson et al., 2015). Here, we take a coarser-grained view of the DAG, with edges corresponding to words (sequences of computational actions followed by a single lexical action) rather than single actions, and we drop abandoned parse paths (see Eshghi et al., 2015; Howes and Eshghi, 2021, for details) \u2013 Fig. 4 shows an example. 
2.1 Type Theory with Records (TTR) Dynamic Syntax is currently integrated with TTR (Cooper, 2012, 2005) as the semantic formalism in which meaning representations are couched (Eshghi et al., 2012; Purver et al., 2011, 2010). (DS models the structural growth of representations and is agnostic to the formalism for semantic representation; as such, it has also been combined with RDF (Addlesee and Eshghi, 2021) and with vector-space representations (Purver et al., 2021).) TTR is an extension of standard type theory, and has been shown to be useful in contextual and semantic modelling in dialogue (see e.g. Ginzburg, 2012; Fern\u00e1ndez, 2006; Purver et al., 2010, among many others), as well as the integration of perceptual and linguistic semantics (Larsson, 2013; Dobnik et al., 2012; Yu et al., 2017). With its rich notions of underspecification and subtyping, TTR has proved crucial for DS research in the incremental specification of content (Purver et al., 2011; Hough, 2015); specification of a richer notion of dialogue context (Purver et al., 2010); models of DS grammar learning (Eshghi et al., 2013); and models for learning dialogue systems from data (Eshghi et al., 2017; Kalatzis et al., 2016; Eshghi and Lemon, 2014). In TTR, logical forms are specified as record types, which are sequences of fields of the form [ l \u2236T ] containing a label l and a type T. Record types can be witnessed (i.e. judged true) by records of that type, where a record is a sequence of label-value pairs [ l = v ]. We say that [ l = v ] is of type [ l \u2236T ] just in case v is of type T. Fields can be manifest, i.e. given a singleton type e.g. [ l \u2236Ta ] where Ta is the type of which only a is a member; here, we write this as [ l=a \u2236T ]. Fields can also be dependent on fields preceding them (i.e. higher) in the record type (see Fig. 2). 
[Figure 2: Example TTR record types] The standard subtype relation \u2291 can be defined for record types: R1 \u2291 R2 if for all fields [ l \u2236T2 ] in R2, R1 contains [ l \u2236T1 ] where T1 \u2291 T2. In Fig. 2, R1 \u2291 R2 if T2 \u2291 T2\u2032, and both R1 and R2 are subtypes of R3. This subtyping relation allows semantic information to be incrementally specified, i.e. record types can be indefinitely extended with more information and/or constraints. Additionally, Larsson (2010) defines the meet (\u2227) operation of two (or more) RTs as the union of their fields: the equivalent of conjunction in FoL; see Figure 3 for an example. We will need this below (Sec. 3) where we define our probabilistic model. [Figure 3: Example of merge operation between two RTs] 2.2 Generation in DS-TTR As alluded to in the introduction, to handle typical incremental phenomena in dialogue such as split utterances, interruptive clarification requests or self-repair, any generation model must be as incremental as interpretation: full syntactic and semantic information should be available after generating every word, with continual access to the incrementally unfolding context of the conversation (Hough and Purver, 2012; Eshghi et al., 2015). In generation, there is an extra requirement on models, namely representational interchangeability (Eshghi et al., 2011): parsing and generation should employ the same mechanisms and use the same kind of representation so that parsing can pick up where generation left off, and vice versa. DS-TTR can meet these requirements, because generation employs exactly the same mechanisms as in parsing (Purver and Kempson, 2004) with the simple addition of a subsumption check against a generation goal concept, expressed as a Record Type (RT) in TTR (see Sec. 2.1); and where this goal concept can be partial (does not need to correspond to a complete sentence), and need only be one step ahead of the generated utterance so far. 
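The subtype, meet, and subsumption relations just described can be sketched in a few lines of Python. This is a deliberately minimal model of our own (not the paper's implementation): record types are flat dicts from labels to atomic type names, dependent and manifest fields are ignored, and atomic subtyping is plain equality.

```python
# Minimal sketch of TTR record types as flat dicts {label: atomic_type}.
# Assumptions (ours, not the paper's): no dependent or manifest fields,
# and atomic subtyping is just equality of type names.

def subtype(r1, r2):
    """R1 is a subtype of R2: every field l:T in R2 also occurs in R1."""
    return all(label in r1 and r1[label] == t for label, t in r2.items())

def meet(r1, r2):
    """Meet of R1 and R2: union of the two field sets (conjunction)."""
    return {**r1, **r2}

def subsumes(r1, r2):
    """R1 subsumes R2 iff R2 is a subtype of R1 (the generation check)."""
    return subtype(r2, r1)

r3 = {"x": "e"}                      # a very general record type
r1 = {"x": "e", "p": "arrive(x)"}    # extends r3 with one more field
assert subtype(r1, r3) and not subtype(r3, r1)
assert subsumes(r3, r1)              # the partial type r3 subsumes r1
assert meet(r3, {"q": "t"}) == {"x": "e", "q": "t"}
```

Incremental specification then corresponds to walking down the subtype ordering: each step extends the current record type with more fields while the more general type continues to subsume the more specific one.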
This ease of matching incrementality in both generation and parsing is not shared by other models aiming to reflect incrementality in the dialogue model while adopting relatively conservative grammar frameworks, some matching syntactic requirements but without incremental semantics (Skantze and Hjalmarsson, 2010), others matching incremental growth of semantic input but leaving the incrementality of structural growth unaddressed (Guhe, 2007). As such, generation involves lexical search whereby, at every step, words from the lexicon are test-parsed in order to find words that (i) are parsable in the current context; and (ii) yield a TTR semantics for the current DS tree that subsumes, i.e. is monotonically extendable to, the generation goal. The subsumption relation is the inverse of the subtype relation defined above (see Sec. 2.1; i.e. R1 subsumes R2 iff R2 \u2291 R1). Without a probabilistic model for word selection at each step of generation, this process is effectively brute-force, computationally very inefficient, and therefore simply impractical, especially with large lexicons. This is the shortcoming that we address here for the first time by conditioning word selection on the generation goal RT. This involves learning, through Maximum Likelihood Estimation from data, P(w \u2223 T, Rg), where w ranges over the lexicon, T is the current DS tree including its maximal semantics, and Rg is the generation goal. This parametrisation is described in full below in Sec. 3. 2.3 Processing Self-repair in DS-TTR In this section, we briefly introduce the DS model of self-repair from Hough and Purver (2012): two types of self-repair are addressed: backward-looking repair (aka. overt repair), where the repair involves a local and partial restart of the reparandum, as in (1); and forward-looking repair (aka. covert repair), where the repair is simply a local extension, i.e. a further specification of the reparandum, as in (2). 
(1) \u201cSure enough ten minutes later the bell r-the doorbell rang\u201d (Schegloff et al., 1977) (2) \u201cI-I mean the-he-they, y\u2019know the guy, the the pathologist, looks at the tissue in the microscope...\u201d (Schegloff et al., 1977) In the model set out above, a backward-looking repair arises due to an online revision of a generation goal RT, whereby the new goal is not a subtype of the one the speaker (or the dialogue manager) had initially set out to realise. We model this via backtracking along the incrementally available context DAG as set out above. More specifically, repair is invoked if there is no possible DAG extension after the test-parsing and subsumption check stage of generation (resulting in no candidate succeeding word edge). The repair procedure proceeds by restarting generation from the last realised (generated) word edge. It continues backtracking by one DAG vertex at a time until the new goal concept is a subtype of the root record type of the current partial tree. Generation then proceeds as usual by extending the DAG from that vertex. The word edges backtracked over are not removed, but are simply marked as repaired (see also Eshghi et al. (2015) for a fuller account), following the principle that the revision process is on the public conversational record and hence should still be accessible for later anaphoric reference (see Fig. 4). Forward-looking repairs, on the other hand, i.e. extensions, where the repair effects an \u201cafterthought\u201d, are also dealt with straightforwardly by the model. The DS-TTR parser simply treats these as monotonic extensions of the current tree, resulting in subtype extension of the root TTR record type. Thus, a change in goal concept during generation will not always put demands on the system to backtrack, such as in generating the fragment after the pause in \u201cI go to Paris ... from London\u201d. 
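To make the backtracking procedure concrete, here is a minimal Python sketch (our illustration, under heavy assumptions: the context DAG is collapsed to a linear list of word edges, each paired with the cumulative root semantics after that word as a flat label-to-value dict, and subsumption is checked by field containment):

```python
def subsumes(current, goal):
    """current subsumes goal iff every field of current occurs in goal
    with the same value (i.e. goal is a subtype of current)."""
    return all(label in goal and goal[label] == v
               for label, v in current.items())

def backtrack_for_repair(path, new_goal):
    """Walk back over realised word edges until the semantics built so far
    subsumes the revised goal. Backtracked edges are returned as 'repaired'
    rather than deleted: they stay on the conversational record."""
    i = len(path)
    repaired = []
    while i > 0 and not subsumes(path[i - 1][1], new_goal):
        i -= 1
        repaired.append(path[i][0])
    return i, repaired  # resume generation from vertex i

# "the bell" with a revised goal about a doorbell, as in example (1)
path = [("the", {"x": "e"}),
        ("bell", {"x": "e", "p": "bell"})]
new_goal = {"x": "e", "p": "doorbell"}
vertex, repaired = backtrack_for_repair(path, new_goal)
# generation would restart from `vertex`, after an interregnum ("uh", "I mean")
```

On this toy path the model backtracks over the single edge for "bell" and resumes from the vertex after "the", mirroring "the bell r- the doorbell".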
Backtracking only operates at a semantics-syntax mismatch where the revised goal concept is no longer a subtype of the root record type for the (sub-)utterance so far realised, as in Figure 4. [Figure 4: Incremental DS-TTR generation of a self-repair upon change of goal concept. Type-matched record types are double-circled nodes and edges indicating failed paths are dotted.] 3 Probabilistic Model of Generation In this section, we follow on from Sec. 2.3 above and describe the probabilistic model that we have developed for incremental probabilistic generation. First, we describe the model itself, its parameters, and how these are estimated from data. Then we describe how the model is used at inference time to generate. Model and Parameter Estimation As noted, generation in Dynamic Syntax is defined in terms of parsing. Specifically, it proceeds via lexical search, i.e. test-parsing (all) words from the lexicon while checking for subsumption against the goal concept: a record type (RT) in TTR; henceforth Rg. Words that parse successfully with a resulting (partial) semantics that subsumes the goal concept are successfully generated. This process goes on until the semantics of the generated sentence equals the goal. This process is highly inefficient and impractical for larger lexicons. On a high level, we solve this problem by building a probabilistic model which conditions the probability of generating the next word, w, on: (i) Rcur: the semantics of the generated utterance thus far; (ii) Rg, the goal concept; and (iii) the current DS tree (henceforth Tcur). We condition on (i) to allow the model to keep track of the semantics of what\u2019s already been generated, i.e. the left semantic context of generation; on (ii) to aid the model in selecting words that contribute the correct semantic increments to approach the goal concept; and on (iii) to capture the syntactic constraints on what words can grammatically follow. 
In sum, we need to compute P(w \u2223 Tcur, Rcur, Rg) for all the words w in the lexicon. As we will see below, we learn to generate by parsing, and we therefore use Bayes\u2019 rule in Eq. 3 to cast probabilistic generation (the left-hand side) in probabilistic parsing terms (the likelihood on the right-hand side): P(w \u2223 Tcur, Rcur, Rg) = P(Tcur, Rcur, Rg \u2223 w) \u22c5 P(w) / P(Tcur, Rcur, Rg) (3) On the right-hand side of Eq. 3, P(w) is the prior probability of w, which we obtain from the frequency of w in our training data; and P(Tcur, Rcur, Rg) is a normalisation constant which we do not need to estimate. We learn P(Tcur, Rcur, Rg \u2223 w) from gold data in the form of \u27e8Utt = \u27e8w1, ..., wN\u27e9, Rg\u27e9, where Utt is the utterance to be generated, and Rg is its gold semantics. 
To do this, we use the DS parser to parse Utt, yielding a parse path (see e.g. Fig. 4) that runs from the DS axiom tree (empty tree) to the tree whose semantics is Rg, together with all the DS trees produced after parsing each wi in between; viz. a sequence Sp = {\u27e8T1, w1\u27e9, ..., \u27e8TN, wN\u27e9}, where Ti are the DS trees in the context of which the wi\u2019s were parsed. This sequence constitutes the observations from which we estimate P(Tcur, Rcur, Rg \u2223 w) by Maximum Likelihood Estimation (MLE). Tcur, Rcur and Rg are all composed of many individual features, and as a whole would be observed very rarely. Therefore, for generalisation, we need to decompose them and compute the probability of the whole as the conjunction (product) of the probabilities of their individual atomic features. For Tcur we follow Eshghi et al. (2013) and consider only one feature of Tcur: that of the type of the pointed node, or a requirement for a type (e.g. Ty(e), ?Ty(e \u2192t), etc.) \u2013 call this Typ. This simplifies the model considerably, but has the downside of not capturing all grammatical constraints (e.g. person constraints in English verbs will not be captured this way), leading to some overgeneration. We also simplify the model by conditioning on the semantics that remains to be generated \u2013 call it Rinc \u2013 rather than conditioning on both Rcur and Rg. We can compute Rinc each time through the well-defined record type subtraction operation in TTR, where: Rinc = Rg/Rcur. With these simplifications, what we need to estimate by MLE from each sequence Sp (see above) is: P(Typ, Rinc \u2223 w). As noted, for any generalisation at all, Rinc now needs to be decomposed into its individual atomic features so that we can compute the probability of each of these features individually, rather than that of Rinc as a whole. We decompose Rinc as follows: Rinc = \u22c0k Rk, where \u2227 is the TTR equivalent of the conjunction operation in FoL (see above, Sec. 
2.1); and each Rk is potentially dependent on Rj where j < k. Using the probabilistic variant of TTR (Cooper et al., 2013, 2014), we can use the chain rule to then derive Eq. 4: P(\u22c0k Rk \u2223 w) = \u03a0k P(Rk \u2223 w, R1 \u2227 ... \u2227 Rk\u22121) (4) This then allows us to express the probability of a complex record type in terms of the product of its potentially dependent, atomic supertypes. This, finally, puts us in a position to compute P(Typ, Rinc \u2223 w) as follows: P(Rinc, Typ \u2223 w) = P(Rinc \u2223 w) \u22c5 P(Typ \u2223 w) (by independence) = P(\u22c0k Rk \u2223 w) \u22c5 P(Typ \u2223 w) (decomposing Rinc) We implement the above procedure by constructing a 2D conditional count table where the rows are the words, and the columns are all the atomic semantic features observed during learning by parsing: effectively the result of decomposing all the Rg\u2019s in our data, in addition to all the Typ features we have observed on all the DS trees encountered in the Sp sequences above. Then, each time we observe an atomic semantic feature of Rinc, say Rk, in the context of a word, w, we increment the cell (Rk, w) by 1. After learning, we normalise the table per word to obtain all P(F \u2223 w), where F ranges over all semantic features and pointed node types, and w over all words in the lexicon. 
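As a rough illustration (ours, not the authors' code), the count-table construction above can be sketched as follows, assuming each training observation has already been decomposed into a word plus the set of atomic feature labels (semantic features of Rinc and the pointed-node Typ feature) it co-occurs with; the feature strings below are purely illustrative:

```python
from collections import defaultdict

def train_count_table(observations):
    """Build the 2D conditional count table described above, normalised
    per word to give P(F | w); also return the word prior P(w)."""
    counts = defaultdict(lambda: defaultdict(int))
    word_freq = defaultdict(int)
    for word, features in observations:
        word_freq[word] += 1
        for f in features:              # increment cell (f, word)
            counts[word][f] += 1
    cond = {w: {f: c / word_freq[w] for f, c in feats.items()}
            for w, feats in counts.items()}
    total = sum(word_freq.values())
    prior = {w: n / total for w, n in word_freq.items()}
    return cond, prior

# Toy observations: (word, atomic features observed when parsing that word)
obs = [
    ("arrives", {"p=arrive(x)", "Typ=?Ty(e->t)"}),
    ("arrives", {"p=arrive(x)"}),
    ("john", {"x=john", "Typ=?Ty(e)"}),
]
cond, prior = train_count_table(obs)
```

Here `cond["arrives"]` gives the per-feature conditional probabilities for "arrives", and `prior` plays the role of P(w) on the right-hand side of Eq. 3.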
This can be done by checking whether Rinc \u2291 Rk; (iii) compute the single Typ (type of pointed node) feature by observing the type of the pointed node on Tcur; (iv) for each row (i.e. each word) take the product (or sum of log probabilities) of all the column features thus triggered in steps (ii) and (iii); (v) sort the words in the lexicon by their probability from (iv) and have the top N fill the beam of size N. Once the beam is thus populated, we use the DS grammar to parse each word in the beam in turn; upon success, that is, if the word is parsable and the resulting semantics subsumes the goal concept, Rg, we move on to generate the next word incrementally until we reach the goal concept, that is, until Rg \u2291 Rcur \u2227 Rcur \u2291 Rg. Repair mechanism The DS repair mechanism, i.e. that of backtrack and parse / generate (see above, Sec. 2.3), is triggered when none of the words in the beam successfully generate; either because none are parsable, or else their resulting semantics don\u2019t subsume Rg (because it may have been revised). When triggered, the model backtracks over the context DAG path (see above), and, following the same inference process, attempts to (re-)populate the beam and generate from there. Backtracking continues until generation is successful, with the model having generated the interregnum (e.g. \"I mean\", \"sorry I mean\", \"uh\", \"no\", etc.) before it generates the first repair word. Generation continues normally from that point until the (potentially new) goal concept is reached. 4 Evaluation 4.1 Data The data to train and test our model comes from the Eve section of the CHILDES corpus (MacWhinney, 2000). This section was annotated with logical forms (LF) by Kwiatkowski et al. (2012). The LFs were then converted to TTR record types (RT) by Eshghi et al. (2013). This dataset consists of utterances from parents to children, so the sentences have a simpler structure than adult-directed language. 
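For concreteness, the beam-filling steps (i)\u2013(v) of the inference procedure above can be sketched as follows. This is our own illustrative code, assuming a trained table `cond` giving P(F | w), a word prior P(w), and that the triggered features of Rinc and the Typ feature have already been computed; smoothing unseen (word, feature) pairs with a tiny probability is our assumption, not a detail from the paper:

```python
import math

def fill_beam(lexicon, cond, prior, triggered_features, typ_feature, n,
              smooth=1e-9):
    """Steps (ii)-(v): score each word by the product of the probabilities
    of its triggered features (in log space) times its prior, then keep
    the top-n words as the beam."""
    features = set(triggered_features) | {typ_feature}
    scored = []
    for w in lexicon:
        table = cond.get(w, {})
        logp = math.log(prior.get(w, smooth))
        for f in features:
            logp += math.log(table.get(f, smooth))
        scored.append((logp, w))
    scored.sort(reverse=True)
    return [w for _, w in scored[:n]]

# Toy trained model (illustrative feature names)
cond = {"arrives": {"p=arrive(x)": 1.0, "Typ=?Ty(e->t)": 0.5},
        "john": {"x=john": 1.0, "Typ=?Ty(e)": 1.0}}
prior = {"arrives": 0.5, "john": 0.5}
beam = fill_beam(["arrives", "john"], cond, prior,
                 ["p=arrive(x)"], "Typ=?Ty(e->t)", 2)
```

Each beam word would then be test-parsed with the DS grammar, keeping only those whose resulting semantics still subsumes Rg; with the toy model above, "arrives" outranks "john" because it matches both triggered features.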
We will use it in the shape of \u27e8Utterance, Goal Concept\u27e9 pairs to train and test our model. For training our generator, we test-parsed the dataset using two versions of the grammar learned by Eshghi et al. (2013): the grammar containing the top 1 hypothesis and the grammar containing the top 3. This resulted in two subsets of the data that could be parsed and in which the produced RT semantics matched the gold semantics exactly. Let\u2019s call these top-1 and top-3 respectively. We report their relevant statistics in Table 1. Table 1 (Filtered Dataset Statistics): top-1 \u2013 729 samples, 2152 words, mode length 3, max length 7, type/token ratio 18.08; top-3 \u2013 1361 samples, 4194 words, mode length 3, max length 7, type/token ratio 21.96. However, although the top-3 grammar from Eshghi et al. (2013) gives wider parsing coverage, it includes many erroneously learned lexical actions. We therefore decided to carry out our experiments below on the top-1 dataset filtered using the top-1 grammar. This comes at the expense of not generating sentences that we\u2019d otherwise be able to generate, although the overall distributions of the two datasets are similar. Therefore, the results we report below are more conservative (i.e. lower) than those we\u2019d have been able to achieve if we\u2019d manually cleaned up the top-3 grammar and applied it to learning and generation. 4.2 Model Evaluation We evaluate our generation model on the top-1 set in two ways: (i) standard evaluation of generation without any mid-generation revisions to the goal; (ii) we evaluate the capability of the same model to generalise to cases where the goal concept is revised mid-generation, i.e. to cases where the model needs to produce self-repairs. Standard evaluation For this, we report percentage of exact match (EM), ROUGE-1, ROUGE-2, and ROUGE-l between the gold sentences in the dataset and the output sentences from the model. 
On the training set, out of 656 training samples, we can generate 597 utterances (91.01%) whose semantics exactly matches the generation goal concept; 416 of these fully match the gold sentence, yielding an EM score of 0.6341 (meaning 63.41% of the output sentences fully match the gold sentences). For the test set, out of 73 total samples, 64 sentences were generated fully to the goal concept (87.67%), and 46 of these (63.01%) completely matched the gold sentence in the dataset. Among the outputs that do not fully match the gold sentences, a large portion are very close to an exact match: for example, the generated sample \u201cwhat is that\u201d, where the gold sentence is \u201cwhat\u2019s that\u201d. Such samples were not initially counted among the exact matches. We then took these to be exact matches and recomputed evaluation scores. The final results are summarised in Table 2. Table 2 (Evaluation results for generation without any goal concept revisions): Train \u2013 EM 0.84, ROUGE-1 0.94, ROUGE-2 0.71, ROUGE-l 0.92; Test \u2013 EM 0.78, ROUGE-1 0.88, ROUGE-2 0.67, ROUGE-l 0.86. 4.3 Generating self-repairs: a zero-shot evaluation To evaluate the ability of the model to generate self-repairs in a zero-shot setting, we generate a dataset of semantic revisions to the goal concept using the original top-1 data. 
We use the Stanford POS tagger to automatically generate a set of revisions, where each revision is a tuple, \u27e8Rg, index, Rr, Uttr, forward\u27e9: Rg is the original goal concept; index is the position along the generation path where the revision takes place; Rr is the revised goal; Uttr is the result of replacing a single word in the original gold utterance with a word from our data of the same POS \u2013 Rr now corresponds to the (revised goal) semantics of Uttr; and, finally, forward is either true or false, marking whether the revised semantic material has already been contributed before index or not; if true, we would expect a forward-looking self-repair, and otherwise a backward-looking one (see Sec. 2.3 above). We derive these revision tuples for every utterance in the dataset with length greater than 4, and on the following Parts of Speech: {NOUN, ADJ, PROPN, ADP, ADV}. These tuples therefore give us 4 experimental conditions, across two binary factors: (i) locality: whether the point at which the revision is made is strictly local to the reparandum, or at a distance of more than 1; (ii) timing: whether the revision comes after or before the corresponding semantic contributions have been made. We then run the revisions through the model and evaluate the output automatically as follows: we use a simple rule-based algorithm to \u2018clean out\u2019 the self-repair from the model output, and compare this to the revised utterance, Uttr. For this comparison, we only report EM \u2013 see Table 3. In total, 641 of the generatable revisions are an exact match. 
Table 3 (EM for zero-shot evaluation of repairs): local \u2013 forward-looking 0.93, backward-looking 0.89; distant \u2013 forward-looking 0.73, backward-looking 0.82. Since we do not have gold data for self-repairs, we did a small human evaluation on the model output: the authors each independently annotated a subset of 30 examples, assigning scores on a Likert scale from 1 to 3 for: (a) grammaticality of the self-repairs; and (b) their human-likeness or naturalness. This initially led to low agreement, so they met to discuss the disagreements in order to iron out the differences between the criteria they had applied. They then continued to annotate 70 additional system outputs. This led to a Krippendorff\u2019s alpha score of 0.88 for grammaticality and 0.82 for naturalness, demonstrating very high agreement. To then report the average scores given by the human annotators, the lower score was chosen when there was a disagreement, resulting in mean scores of 2.72 and 2.28 for grammaticality and naturalness respectively, confirming the quality of the generated output. 5 Discussion During the error analysis we observed the following error patterns. In the standard evaluation of generation, there were 199 instances where the model had fully generated to the goal concept, while the generated output did not match the gold utterance. Many were cases where the model had generated a statement instead of a question or vice versa (e.g. \u201cI may see them\u201d generated instead of \u201cmay I see them\u201d). In a few cases, the generated output was ungrammatical with the wrong word order: both of these are caused by the original grammar from Eshghi et al. (2013) overgenerating \u2013 this is acknowledged by the authors, and it is due to the fact that their induced grammar did not capture the full set of syntactic constraints present in their data. This is in turn because they were only conditioning their search on the type of the pointed node, like we do here. 
Inducing the full set of syntactic constraints was left to future work, as it is here. 5.1 Limitations Our evaluation in this paper has at least two important limitations: (1) We evaluate our incremental generation model on a small and relatively simple dataset (leading to high ROUGE scores because of the little variation in the data and the relative similarity between training and testing sets), due to the fact that we currently do not have access to a wider-coverage grammar. However, this was a conscious choice on the authors\u2019 part: we used a learned grammar to induce our probabilistic generation model and evaluated it on exactly the same dataset from which the grammar was learned (Eshghi et al., 2013). This was deemed to be methodologically both sounder and cleaner than, say, using a manually constructed grammar. We also believe that the probabilistic model we have contributed here will generalise to larger, more complex datasets when wider-coverage grammars become available. We leave this for future work. (2) Perhaps more importantly, we have no comparative evaluation, and this in a climate where neural NLG has seen astonishing advances in the work on Transformer-based (large) Language Models. To carry out this comparative evaluation, we need to integrate our model with a downstream, and, ideally, multimodal dialogue task (see e.g. Yu et al. (2016, 2017) for how DS-TTR can be integrated within a visually grounded task). This requires substantial further work, which is our next step. 5.2 Why a grammar-based approach? It might reasonably be asked why we are using a grammar-based approach in the age of Large Language Models (LLMs) such as GPT-4, and the large number of other, open-source models that have followed. These models are astonishing few-shot learners, and have recently achieved great successes that few thought possible (e.g.
in open-domain dialogue, conversational question answering, essay writing, summarisation, translation, etc.), and are changing the human world in ways that we have not yet had time to grasp. Nevertheless, for the moment, the fact remains that: (a) these models are extremely costly to train and run due to their sheer size and the amount of resources (data, compute power, energy) needed to train them; it\u2019s also been demonstrated, time and again, that they have poor compositional generalisation properties (see Pantazopoulos et al. (2022); Nikolaus et al. (2019) among others), which explains much of their characteristic data inefficiency; (b) they are very difficult to control and/or adapt, while often producing factually incorrect statements, commonly referred to as hallucinations (Rashkin et al., 2021; Dziri et al., 2022), using very convincing language \u2013 this extends to confident prediction of erroneous actions or plans in multi-modal, embodied settings; (c) they are very hard to sufficiently verify, making them unsuitable for use in safety-critical domains such as healthcare; (d) particularly important for us here, unlike recurrent models such as RNNs and LSTMs, standard Transformer-based neural architectures (Vaswani et al., 2017) are not properly incremental \u2013 even the auto-regressive variants such as GPT \u2013 in the sense that they process word sequences as a whole, rather than word by word; they can be run under an \u2018incremental interface\u2019 (Madureira and Schlangen, 2020; Rohanian and Hough, 2021) where input is reprocessed from the beginning with every new token, but even then, they exhibit poor incremental performance with unstable output compared to e.g. LSTMs (Madureira and Schlangen, 2020).
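The \u2018incremental interface\u2019 idea just described can be illustrated with a toy sketch: a non-incremental model is re-run on the whole prefix after every new token, and we count how often output it had already committed to gets revised. The stand-in `model` below is hypothetical (a trivial right-context-sensitive tagger); a real evaluation would use an actual language model.

```python
def model(prefix):
    # Hypothetical stand-in model: tags each word, but tags a word 'MOD'
    # when the *next* word is 'arrives', so its decision for an earlier
    # position can change once more right context appears.
    tags = []
    for i, w in enumerate(prefix):
        nxt = prefix[i + 1] if i + 1 < len(prefix) else None
        tags.append("MOD" if nxt == "arrives" else "WORD")
    return tags

def incremental_interface(tokens):
    """Re-run `model` on every prefix; count revisions to earlier output."""
    prev, revisions, outputs = [], 0, []
    for t in range(1, len(tokens) + 1):
        out = model(tokens[:t])
        # A revision is any position whose label differs from the label
        # emitted for the same position at the previous time step.
        revisions += sum(1 for a, b in zip(prev, out) if a != b)
        prev = out
        outputs.append(out)
    return outputs, revisions
```

Running this on "today robin arrives" shows one revision: the tag for "robin" flips once "arrives" is seen, which is exactly the kind of output instability the incremental-processing literature measures.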
Interesting recent work has explored using Linear Transformers (Katharopoulos et al., 2020) with recurrent memory to properly incrementalise LMs (Kahardipraja et al., 2021a), but this work is as yet in its infancy, and we do not yet know of any work that integrates LMs end to end within a real-time, incremental dialogue system. On the other hand, grammar-based approaches have the advantage of being highly controllable and transparent; but crucially, they incorporate the very large wealth of linguistic knowledge that has arisen from decades of linguistics and semantics research. This knowledge has been demonstrated to be a very effective source of inductive bias in grammar-based models, which in turn translates to remarkable generalisation potential, and thus also data efficiency (see e.g. Mao et al. (2021) for a CCG-based multi-modal model, and Eshghi et al. (2017) for a DS-TTR-based one) \u2013 see Eshghi et al. (2022) for an extended discussion. One common criticism is that grammar-based models are brittle. This is often true, but we do not believe this to be a fundamental property, and think that specific grammars of a language are adaptable and learnable from interaction. But much work remains to be done to demonstrate this property. For these reasons, we believe that grammar-based approaches hold promises that are as yet unfulfilled, and are therefore still worth exploring in parallel to the much needed work on making LM architectures and training regimes more incremental (see Kahardipraja et al. (2021b, 2023)). 6 Conclusion We developed the first semantic, probabilistic model of real-time language generation using the Dynamic Syntax framework. The results show that the model performs well, even though we evaluated it only on a small dataset. We also demonstrated the zero-shot generalisation ability of the model to generate self-repairs where none were observed during training.
To our knowledge, this is the first model capable of reacting to real-time changes to the generation goal by generating suitable self-corrections. This ability is essential in dialogue systems in highly dynamic contexts or environments. Our generation model can be seamlessly integrated into incremental dialogue system architectures (e.g. based on Schlangen and Skantze (2009)). This work further highlights the generalisation power of grammar-based approaches, and lays the foundations for creating conversational AI systems that are controllable, data-efficient, more naturally interactive, and more accessible to people with cognitive impairments. Acknowledgements We are very grateful to Tanvi Dinkar and Julian Hough for some of the ideas in this paper and subsequent discussion. We would also like to thank the SemDial reviewers whose constructive critique led to further changes and elaboration."
+ },
+ {
+ "url": "http://arxiv.org/abs/1709.07858v1",
+ "title": "Bootstrapping incremental dialogue systems from minimal data: the generalisation power of dialogue grammars",
+ "abstract": "We investigate an end-to-end method for automatically inducing task-based\ndialogue systems from small amounts of unannotated dialogue data. It combines\nan incremental semantic grammar - Dynamic Syntax and Type Theory with Records\n(DS-TTR) - with Reinforcement Learning (RL), where language generation and\ndialogue management are a joint decision problem. The systems thus produced are\nincremental: dialogues are processed word-by-word, shown previously to be\nessential in supporting natural, spontaneous dialogue. We hypothesised that the\nrich linguistic knowledge within the grammar should enable a combinatorially\nlarge number of dialogue variations to be processed, even when trained on very\nfew dialogues. Our experiments show that our model can process 74% of the\nFacebook AI bAbI dataset even when trained on only 0.13% of the data (5\ndialogues). It can in addition process 65% of bAbI+, a corpus we created by\nsystematically adding incremental dialogue phenomena such as restarts and\nself-corrections to bAbI. We compare our model with a state-of-the-art\nretrieval model, MemN2N. We find that, in terms of semantic accuracy, MemN2N\nshows very poor robustness to the bAbI+ transformations even when trained on\nthe full bAbI dataset.",
+ "authors": "Arash Eshghi, Igor Shalyminov, Oliver Lemon",
+ "published": "2017-09-22",
+ "updated": "2017-09-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "I.2.7"
+ ],
+ "main_content": "Introduction There are currently several key problems for the practical data-driven (rather than hand-crafted) development of task-oriented dialogue systems, among them: (1) large amounts of dialogue data are needed, i.e. thousands of examples in a domain; (2) this data is usually required to be annotated with task-specific semantic/pragmatic information for the domain (e.g. various dialogue act schemes); and (3) the resulting systems are generally turn-based, and so do not support natural spontaneous dialogue which is processed incrementally, word-by-word, with many characteristic phenomena that arise from this incrementality. (Dataset available at https://bit.ly/babi_plus.) In overcoming issue (2), a recent advance made in research on (non-task) chat dialogues has been the development of so-called \u201cend-to-end\" systems, in which all components are trained from textual dialogue examples, e.g. (Sordoni et al., 2015; Vinyals and Le, 2015). However, as Bordes and Weston (2017) argued, these end-to-end methods may not transfer well to task-based settings (where the user is trying to achieve a domain goal, such as booking a flight or finding a restaurant, resulting in an API call). Bordes and Weston (2017) then presented an end-to-end method using Memory Networks (memn2ns), which achieves 100% performance on a test set of 1000 dialogues, after being trained on 1000 dialogues. This method processes dialogues turn-by-turn, and so does not have the advantages of more natural incremental systems (Aist et al., 2007; Skantze and Hjalmarsson, 2010); nor does it really perform language generation, rather it\u2019s based on a retrieval model that selects from a set of candidate system responses seen in the data.
This paper investigates an approach to these challenges dubbed babble, using an incremental, semantic parser and generator for dialogue (Eshghi et al., 2011; Eshghi, 2015), based around the Dynamic Syntax grammar formalism (DS, Kempson et al. (2001); Cann et al. (2005)). Our advance in this paper, for end-to-end systems, is therefore twofold: (a) the babble method overcomes the requirement for large amounts of dialogue data (i.e. 1000s of dialogues in a domain); (b) the resulting systems are word-by-word incremental, in parsing, generation, and dialogue management alike. We show that using only 5 example dialogues from the bAbI Task 1 dataset (i.e. 0.13% of the training data used by Bordes et al. (2017)), babble can automatically induce dialogue systems which process 74% of the bAbI test set in an incremental manner. We then introduce an extended incremental version of the bAbI dataset, which we call bAbI+ (see section 4.1), which adds some characteristic incremental phenomena such as mid-utterance self-corrections to the bAbI dialogues (this new dataset is freely available). Using this, we demonstrate that the babble system can in addition generalise to, and process 65% of, the bAbI+ dataset, still when trained only on 5 dialogues from bAbI. We compare this method to Bordes et al.\u2019s (2017) memn2n, which, in terms of semantic accuracy (reflected in how well api-calls are predicted at the end of bAbI Task 1), shows very poor robustness to the bAbI+ transformations, even when it is trained on the full bAbI dataset. This overall method is portable to other task-based domains. Furthermore, as we use a semantic parser, the semantic/contextual representations of the dialogue can be used directly for large-scale inference, required in more complex tasks (e.g. interactive QA and search).
1.1 Dimensions of Pragmatic Synonymy There are two important dimensions along which dialogues can vary but nevertheless lead to identical contexts: interactional and lexical. Interactional synonymy is analogous to syntactic synonymy \u2013 when two distinct sentences are parsed to identical logical forms \u2013 except that it occurs not only at the level of a single sentence, but at the dialogue or discourse level. Fig. 1 shows examples of interactional variants that lead to very similar final contexts, in this case, that the user wants to buy an LG phone. These dialogues can be said to be pragmatically synonymous for this domain. Arguably, a good computational model of dialogue processing and interactional dynamics should be able to capture this synonymy. Lexical synonymy relations, on the other hand, hold among utterances, or dialogues, when different words (or sequences of words) express meanings that are sufficiently similar in a particular domain. What is striking about lexical synonymy relations is that, unlike syntactic/interactional ones, they can often break down when one moves to another domain: lexical synonymy relations are domain-specific. Eshghi & Lemon (2014) developed a method similar in spirit to Kwiatkowski et al. (2013) for capturing lexical synonymy relations by creating clusters of semantic representations, based on observations that they give rise to similar or identical extra-linguistic actions within a domain (e.g. a database query, a flight booking, or any API call). Distributional methods could also be used for this purpose (see e.g. Lewis & Steedman (2013)). In general, this kind of clustering is achieved when the domain-general semantics resulting from semantic parsing is grounded in a particular domain.
We note that while interactional synonymy relations in dialogue should be accounted for by semantic grammars or formal models of dialogue structure (such as DS-TTR (Eshghi et al., 2012), or KoS (Ginzburg, 2012)), lexical synonymy relations have to be learned from data. 2 Why a grammar-based approach? Recent end-to-end data-driven machine learning approaches treat dialogue as a sequence-to-sequence generation problem, and train their models from large datasets, e.g. (Sordoni et al., 2015; Wen et al., 2016b,a; Vinyals and Le, 2015). The systems resulting from these types of approach are in principle able to handle variations/patterns that they have encountered (sufficiently often) in the training data, but not beyond. This large-data constraint is problematic for developers, but is also strange when we consider the structural knowledge that we have about language and dialogue that can be encoded in grammars and computational models of interaction. Indeed, it is often stated that for humans to learn how to perform adequately in a domain, one example is enough from which to learn (e.g. Li et al. (2006)). Furthermore, as these systems do not parse to logical forms, they do not allow for explicit inference, which further limits their application. We therefore develop a method combining learning from data with an incremental semantic grammar of dialogue that is able to generalise from a small number of observations in a domain \u2013
Figure 1: Some Interactional Variations in a Shopping Domain in fact even from just a few examples of successful dialogues \u2013 to a large range of interactional and syntactic variations, including everyday natural incremental phenomena. 3 Inducing Dialogue Systems Our overall method involves incrementally parsing dialogues, and encoding the resulting semantics as state vectors in a Markov Decision Process (MDP), which is then used for Reinforcement Learning (RL) of word-level actions for system output (i.e. a combined incremental DM and NLG module for the resulting dialogue system). 3.1 Dynamic Syntax and Type Theory with Records (DS-TTR) Dynamic Syntax (DS) is an action-based, wordby-word incremental and semantic grammar formalism (Kempson et al., 2001; Cann et al., 2005), especially suited to the highly fragmentary and context-dependent nature of dialogue. In DS, words are conditional actions semantic updates; and dialogue is modelled as the interactive and incremental construction of contextual and semantic representations (Eshghi et al., 2015) see Fig. 2. The contextual representations a\ufb00orded by DS are of the \ufb01ne-grained semantic content that is jointly negotiated/agreed upon by the interlocutors, as a result of processing questions and answers, clari\ufb01cation interaction, acceptances, self/other-corrections, restarts, and other characteristic incremental phenomena in dialogue see 3 for a sketch of how self-corrections and restarts are processed via a backtrack and search mechanism over the parse search graph (see Hough (2011); Hough and Purver (2014); Eshghi et al. (2015) for details of the model, and how this parse search graph is e\ufb00ectively the context of the conversation). Generation/linearisation in DS is de\ufb01ned using trialand-error parsing (see Section 3.2, with the provision of a generation goal, viz. the semantics of the utterance to be generated. 
Generation thus proceeds, just as with parsing, on a word-by-word basis (see Purver et al. (2014); Hough (2015) for details). The upshot of this is that using DS, we can not only track the semantic content of some current turn as it is being constructed (parsed or generated) word-by-word, but also the context of the conversation as a whole, with the latter also encoding the grounded/agreed content of the conversation (see e.g. Fig. 4, and see Eshghi et al. (2015); Purver et al. (2010) for details). Crucially for our model below, the inherent incrementality of DS-TTR, together with the word-level, as well as cross-turn, parsing constraints it provides, enables the word-by-word exploration of the space of grammatical dialogues, and the semantic and contextual representations that result from them. Type Theory with Records (TTR) is an extension of standard type theory shown to be useful in semantics and dialogue modelling (Cooper, 2005; Ginzburg, 2012). To accommodate dialogue processing, and allow for richer representations of the dialogue context, recent work has integrated DS and the TTR framework to replace the logical formalism in which meanings are expressed (Purver et al., 2010, 2011; Eshghi et al., 2012). In TTR, logical forms are specified as record types (RTs), sequences of fields of the form [l : T] containing a label l and a type T. RTs can be witnessed (i.e. judged as true) by records of that type, where a record is a sequence of label-value pairs [l = v], and [l = v] is of type [l : T] just in case v is of type T (see Fig. 2 for example record types). Importantly for us here, the standard subtype relation \u2291 can be defined for record types: R1 \u2291 R2 if for all fields [l : T2] in R2, R1 contains [l : T1] where T1 \u2291 T2. A record type can thus be indefinitely extended, and is therefore always underspecified by definition.
This allows for incrementally growing meanings to be expressed in a natural way as more words are parsed or generated in turn. [Figure 2: Incremental parsing using DS-TTR \u2013 record types growing word-by-word through \u201cA: Today\u201d, \u201c.. Robin arrives\u201d, \u201cB: from?\u201d, \u201cA: Sweden\u201d. Figure 3: DS-TTR: Incremental Parsing of self-corrections and restarts.] In addition, as will become clear below, this subtype checking operation is the key mechanism used in our system below for feature checking.
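The record-type subtype relation \u2291 just defined can be sketched in a few lines. This is a deliberate simplification: record types are reduced to flat dicts mapping labels to type names, with equality standing in for the base-type subtype relation (real TTR types are richer, e.g. dependent and nested).

```python
def base_subtype(t1: str, t2: str) -> bool:
    # Simplification: base types are atomic strings, and subtyping on
    # them collapses to equality.
    return t1 == t2

def rt_subtype(r1: dict, r2: dict) -> bool:
    """R1 is a subtype of R2 iff every field [l : T2] of R2 has a
    counterpart [l : T1] in R1 with T1 a subtype of T2."""
    return all(l in r1 and base_subtype(r1[l], t2) for l, t2 in r2.items())
```

Because a record type can always be extended with further fields, any extended type remains a subtype of the type it grew from, which is what makes this check usable for matching incrementally growing meanings against feature templates.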
3.2 Overall Method: babble In this section we describe our method for combining incremental dialogue parsing with Reinforcement Learning for Dialogue Management (DM) and Natural Language Generation (NLG), where these are treated as a joint decision/optimisation problem. We start with two resources: a) a DS-TTR parser DS (either learned from data (Eshghi et al., 2013a), or constructed by hand), for incremental language processing, but also, more generally, for tracking the context of the dialogue using Eshghi et al.\u2019s model of feedback (Eshghi et al., 2015; Eshghi, 2015; Eshghi et al., 2011); b) a set D of transcribed successful dialogues in the target domain. We perform the following steps overall to induce a fully incremental dialogue system from D: 1. Automatically induce the MDP state space, S, and the dialogue goal, GD, from D; 2. Automatically define the state encoding function F : C \u2192 S, where s \u2208 S is a (binary) state vector, designed to extract from the current context of the dialogue the semantic features observed in the example dialogues D; and c \u2208 C is a DS context, viz. a pair of TTR Record Types \u27e8cp, cg\u27e9, where cp is the content of the current, PENDING clause as it is being constructed, but not necessarily fully grounded yet; and cg is the content already jointly built and GROUNDED by the interlocutors (loosely following the DGB model of (Ginzburg, 2012)). 3. Define the MDP action set as the DS lexicon L (i.e. actions are words); 4. Define the reward function R as reaching GD, while minimising dialogue length. We then solve the generated MDP using Reinforcement Learning with a standard Q-learning method, implemented using BURLAP (McGlashan, 2016): we train a policy \u03c0 : S \u2192 L, where L is the DS Lexicon, and S the state space induced using F. The system is trained in interaction with a (semantic) simulated user, also automatically built from the dialogue data and described in the next section.
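The MDP setup above (actions are words, reward for reaching the goal, a per-word cost to keep dialogues short) can be sketched with tabular Q-learning in a toy domain. Everything here is a stand-in: the two-word lexicon and goal are invented, a word tuple replaces the encoded DS context, and a few lines of Python replace BURLAP.

```python
import random

# Toy MDP: states are the word sequence generated so far (a stand-in
# for the encoded DS context), actions are words from the lexicon.
LEXICON = ["which", "brand", "okay", "<stop>"]
GOAL = ("which", "brand")  # invented goal "semantics"

def step(state, word):
    new = state + (word,)
    if new == GOAL:
        return new, 10.0, True            # goal reached
    if len(new) >= len(GOAL) or word == "<stop>":
        return new, -10.0, True           # out-of-domain word / gave up
    return new, -1.0, False               # per-word length penalty

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Standard tabular Q-learning with an epsilon-greedy policy."""
    random.seed(seed)
    Q = {}
    for _ in range(episodes):
        state, done = (), False
        while not done:
            if random.random() < eps:
                a = random.choice(LEXICON)
            else:
                a = max(LEXICON, key=lambda w: Q.get((state, w), 0.0))
            nxt, r, done = step(state, a)
            best_next = 0.0 if done else max(Q.get((nxt, w), 0.0) for w in LEXICON)
            q = Q.get((state, a), 0.0)
            Q[(state, a)] = q + alpha * (r + gamma * best_next - q)
            state = nxt
    return Q

def greedy_rollout(Q):
    """Generate word-by-word by following the learned greedy policy."""
    state, done, words = (), False, []
    while not done:
        a = max(LEXICON, key=lambda w: Q.get((state, w), 0.0))
        state, _, done = step(state, a)
        words.append(a)
    return words
```

After training, the greedy policy generates the goal utterance word by word; in the real system the simulated user additionally cuts exploration short as soon as a generated word leaves the domain.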
The state encoding function, F: As shown in Figure 4, the MDP state is a binary vector of size 2 \u00d7 |\u03a6|, i.e. twice the number of the RT features. One half of the state vector contains the grounded features (i.e. those agreed by the participants) \u03c6i, while the other half contains the current semantics being incrementally built in the current dialogue utterance. Formally: s = \u27e8F1(cp), . . . , Fm(cp), F1(cg), . . . , Fm(cg)\u27e9, where Fi(c) = 1 if c \u2291 \u03c6i, and 0 otherwise. (Recall that \u2291 is the RT subtype relation.) [Figure 4: Semantics to MDP state encoding with RT features \u2013 for the dialogue \u201cSYS: What would you like? USR: a phone SYS: by which brand?\u201d, five RT features F1 . . . F5 checked against the current-turn and grounded semantics yield Current Turn: \u27e81, 1, 1, 1, 1\u27e9 and Grounded: \u27e80, 1, 0, 1, 1\u27e9.] 3.2.1 Semantic User Simulation The simulator is in charge of two key tasks during training: (1) generating user turns in the right dialogue contexts; and (2) word-by-word monitoring of the utterance so far generated by the system during exploration (i.e. its babbling of grammatical word sequences). To exploit (and evaluate) the full generalisation properties of the DS dialogue model, both (1) and (2) use the full machinery of the DS parser, as well as the state encoding function F, described above. They are thus performed based on the semantic context of the dialogue so far, as tracked by DS (rather than, e.g., being based on string or template matching). Since this includes not just the semantic features of the current turn, but also of the history of the conversation, our simulator respects the turn orderings encountered in the data, i.e. it is sensitive to the order in which information is gathered from the user. The rules required for (1) & (2) are extracted automatically from the raw dialogue data, D, using DS and F. The dialogues in D are parsed and encoded using F incrementally.
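The state encoding F and the simulator's bitmask subsumption check can be sketched together. This is a simplified sketch under our own assumptions: record types are reduced to flat label-to-type dicts, the subtype relation to field containment, and the feature contents below are invented for illustration.

```python
def rt_subtype(r, phi):
    """Simplified RT subtype check: every field of phi occurs in r."""
    return all(l in r and r[l] == t for l, t in phi.items())

def encode_state(pending, grounded, features):
    """s = <F1(cp), ..., Fm(cp), F1(cg), ..., Fm(cg)>: one bit per RT
    feature for the pending (current-turn) and grounded contexts."""
    cur = [1 if rt_subtype(pending, phi) else 0 for phi in features]
    grd = [1 if rt_subtype(grounded, phi) else 0 for phi in features]
    return cur + grd

def extendible_to(state, trigger):
    """A state is extendible to a trigger state s_i iff every bit set in
    the state is also set in s_i, i.e. a single bitmask operation."""
    s = int("".join(map(str, state)), 2)
    t = int("".join(map(str, trigger)), 2)
    return s & t == s

def in_domain(state, triggers):
    """A generated word is acceptable iff the resulting state can still
    be extended to at least one recorded trigger state."""
    return any(extendible_to(state, t) for t in triggers)
```

Because the vectors are binary, the monitoring check costs a single AND per trigger, which is what lets the simulator reject out-of-domain words immediately during training.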
For (1), all the states that trigger the user into action, si = F(c) \u2013 where c is a DS context \u2013 immediately prior to any user turn, are recorded and mapped to what the user ends up saying in those contexts; for more than one training dialogue there may be more than one candidate (in the same context/state). The rules thus extracted will be of the form: strig \u2192 {u1, . . . , un}, where ui are user turns. Now note that the si\u2019s prior to the user turns also immediately follow system turns. And thus to perform (2), i.e. to monitor the system\u2019s behaviour during training, we only need to check further that the current state resulting from processing a word generated by the system subsumes \u2013 is extendible to \u2013 one of the si. We perform this through a simple bitmask operation (recall that the states are binary). The simulation can thus semantically identify erroneous/out-of-domain actions (words) by the system. It would then terminate the learning episode and penalise the system immediately, aiding the speed of training significantly. 4 Evaluation We have so far induced two prototype dialogue systems, one in an \u2018electronics shopping\u2019 domain (see Kalatzis et al. (2016) and Fig. 1) and another in a \u2018restaurant-search\u2019 domain, showing that fully incremental dialogue systems can be automatically induced from small amounts of unannotated dialogue transcripts (Kalatzis et al., 2016; Eshghi et al., 2017) \u2013 in this case both systems were bootstrapped from a single successful example dialogue. We are in the process of evaluating these systems with real users. In this paper, however, our focus is not on building dialogue systems per se, but on: (1) studying and quantifying the interactional and structural generalisation power of the DS-TTR grammar formalism (see Section 2), and that of symbolic, grammar-based approaches to language processing more generally.
We focus here on specific dialogue phenomena, such as mid-sentence self-corrections, hesitations, and restarts (see below); (2) doing the same for Bordes and Weston\u2019s (2017) state-of-the-art, bottom-up response retrieval model, without use of linguistic knowledge of any form; and (3) comparing (1) and (2). In order to test and quantify the interactional and structural generalisation power/robustness of the two models, babble and memn2n, we need contrasting dialogue datasets that control for interactional vs. lexical variations in the input dialogues. Furthermore, to make our results comparable to the existing approach of Bordes and Weston (2017), we need to use the same dataset that they have used. We therefore use Facebook AI Research\u2019s bAbI dialogue tasks dataset (Bordes et al., 2017). These are goal-oriented dialogues in the domain of restaurant search. Here we tackle Task 1, where in each dialogue the system asks the user about their preferences for the properties of a restaurant, and each dialogue results in an API call which contains values of each slot obtained. Other than the explicit API call notation, there are no annotations in the data whatsoever. 4.1 The bAbI+ dataset While containing some lexical variation, the original bAbI dialogues significantly lack the interactional variation vital for natural real-life dialogue. In order to obtain such variation while holding lexical variation constant, we created the bAbI+ dataset by systematically transforming the bAbI dialogues. bAbI+ is an extension of the original bAbI Task 1 dialogues with everyday incremental dialogue phenomena (hesitations, restarts, and corrections \u2013 see below). While the original bAbI tasks 2\u20147 increase the user\u2019s goal complexity, the modifications introduced in bAbI+ can be thought of as orthogonal to this: we instead increase the complexity of the surface forms of dialogue utterances, while keeping every other aspect of the task fixed.
The variations introduced in bAbI+ are: 1. Hesitations, e.g. as in \u201cwe will be uhm eight\u201d; 2. Restarts, e.g. \u201ccan you make a restaurant uhm yeah can you make a restaurant reservation for four people with french cuisine in a moderate price range\u201d; and 3. Corrections affecting task-specific information \u2013 both short-distance ones correcting one token, e.g. \u201cwith french oh no spanish food\u201d, and long-distance NP/PP-level corrections, e.g. \u201cwith french food uhm sorry with spanish food\u201d. The phenomena above are mixed in probabilistically from fixed sets of templates into the original data (templates available at https://github.com/ishalyminov/babi_tools). The modifications affect a total of 11336 utterances in the 3998 dialogues. Around 21% of user turns contain corrections, 40% hesitations, and 5% restarts (they are not mutually exclusive, so that an utterance can contain up to 3 modifications). Our modifications, with respect to corrections in particular, are more conservative than those observed in real-world data: Hough (2015) reports that self-corrections appear in 20% of all turns of natural conversations from the British National Corpus, and in 40% of turns in the Map Task, a corpus of human-human goal-oriented dialogues. Here\u2019s part of an example dialogue in the bAbI+ corpus: sys: hello what can I help you with today? usr: I\u2019d like to book a uhm yeah I\u2019d like to book a table in a expensive price range sys: I\u2019m on it. Any preference on a type of cuisine? usr: with indian food no sorry with spanish food please 4.2 Memory Network setup In all the experiments we describe below, we follow Bordes and Weston\u2019s setup by using a memn2n (we took an open-source Tensorflow implementation for bAbI QA tasks and modified it according to their setup \u2013 see details below). In order to adapt the data for the memn2n, we transform the dialogues into triplets.
The number of triplets for a single dialogue is equal to the number of the system\u2019s turns, and in each triplet, the answer is the current system\u2019s turn, the question is the user\u2019s turn preceding it, and the story is a list of all the previous turns from both sides. The memn2n hyperparameters are set as follows: 1 hop, and 128 as the size of embeddings; we train it for 100 epochs with a learning rate of 0.01 and a batch size of 8 \u2013 in this we follow the best bAbI Task 1 setup reported by Bordes et al. (2017). [Footnote 3: See https://github.com/ishalyminov/memn2n] 4.3 Testing the DS-TTR parser Dynamic Syntax (DS) lexicons are learnable from data (Eshghi et al., 2013a,b). But since the lexicon was induced from a corpus of child-directed utterances in this prior work, there were some constructions as well as individual words that it did not include.4 One of the authors therefore extended this induced grammar manually to cover the bAbI dataset, which, despite not being very diverse, contains a wide range of complex grammatical constructions, such as long sequences of prepositional phrases, adjuncts, short answers to yes/no and wh-questions, appositions of NPs, causative verbs etc. [Footnote 4: We are currently looking into applying Eshghi et al.\u2019s (2013a) model to induce DS grammars from larger semantic corpora such as the Groningen Meaning Bank, leading to much more wide-coverage lexicons.] We parsed all dialogues in the bAbI train and test sets, as well as in the bAbI+ corpus, word-by-word, including both user and system utterances, in context. The grammar parses 100% of the dialogues, i.e. it does not fail on any word in any of the dialogues. We assess the semantic accuracy of the parser on bAbI & bAbI+ using the dialogue-final api-calls in section 4.5 below. 4.4 Experiment 1: Generalisation from small data We have now set out all we need to perform the first experiment.
Our aim here is to assess the generalisation power that results from the grammar and our state encoding method (section 3.1); we dub our overall model babble, and compare this to the state-of-the-art results of Bordes et al. (2017). The method in Bordes et al. (2017) is not generative; rather, it is based on retrieval of system responses, based on the history of the dialogue up to that point. Therefore, for direct comparison, and for simplicity of exposition, we do the same here: we apply the method described for creating a user simulation (section 3.2.1), this time for the system side, resulting in a \u2018system simulation\u2019. We then use this to predict a system response, by parsing and encoding the containing test dialogue up to the point immediately prior to the system turn. This results in a triggering state, s_trig, which is then used as the key to look up the system\u2019s response from the rules constructed as per section 3.2.1. The returned response is then parsed word-by-word as normal, and this same process continues for the rest of the dialogue. This method uses the full machinery of DS-TTR and our state-encoding method (the babble model) and will thus reflect the generalisation properties that we are interested in. Cross-Validation Since we are here interested in data efficiency and generalisation, we use all the bAbI and bAbI+ data (the train, dev, and test sets) as follows: we train Bordes & Weston\u2019s memn2n and babble from 1-5 examples selected at random from the longest dialogues in bAbI \u2013 note that bAbI+ data is never used for training in these experiments. This process is repeated across 10 folds. The models are then tested on sets of 1000 examples selected at random, in each fold. Both the training and test sets constructed in this way are kept constant in each fold across the babble & memn2n models. The test sets are selected either exclusively from bAbI or exclusively from bAbI+.
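The retrieval step described above, looking up a system response by its triggering dialogue state, reduces to a keyed lookup once states are encoded as hashable values. A minimal sketch (the class and its interface are illustrative; the real system keys on DS-TTR state encodings built as in section 3.1):

```python
class SystemSimulation:
    """Map an encoded dialogue state to the system response observed in
    training. A hashable tuple stands in here for the DS-TTR state
    encoding; the rules dict plays the role of the induced lookup rules."""

    def __init__(self):
        self.rules = {}

    def observe(self, state, response):
        # Record a (triggering state -> response) rule from a training dialogue.
        self.rules[state] = response

    def respond(self, trigger_state):
        # Retrieve the response keyed by the triggering state s_trig;
        # states never seen in training yield no prediction.
        return self.rules.get(trigger_state)
```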
4.4.1 Results: Predicting system turns Table 1 shows per-utterance accuracies for the babble & memn2n models. Per-utterance accuracy is the percentage of all system turns in the test dialogues that were correctly predicted. The table shows that babble can generalise to a remarkable 74% of bAbI and 65% of bAbI+ with only 5 input dialogues from bAbI. It also shows that memn2ns can also generalise remarkably well, although, as discussed below, this result is misleading on its own, as the memn2ns are very poor at generating the final api-calls correctly on both the bAbI & bAbI+ data, and are thus making too many semantic mistakes. 4.5 Experiment 2: Semantic Accuracy The results from Experiment 1 on their own can be misleading, as correct prediction of system responses does not in general tell us enough about how well the models are interpreting the dialogues, or whether they are doing this with a sufficient level of granularity. To assess this, in this second experiment, we measure the semantic accuracy of each model by looking exclusively at how accurately they predict the final api-calls in the bAbI & bAbI+ datasets. For the memn2n model, we follow the same overall procedure as in the previous experiment: train on bAbI data, and test on bAbI+. 4.5.1 Results: Prediction of api-calls BABBLE Mere successful parsing of all the dialogues in the bAbI and bAbI+ datasets as shown above doesn\u2019t mean that the semantic representations compiled for the dialogues were in fact correct. To measure the semantic accuracy of the DS-TTR parser we programmatically checked that the correct slot values \u2013 those in the api-call annotations \u2013 were in fact present in the semantic representations produced by the parser for each dialogue (see Fig. 2 for example semantic representations). We further checked that there is no other incorrect slot value present in these representations.
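The semantic-accuracy check just described can be sketched as below, with a plain slot-to-value dict standing in for the parser's TTR record type (the function name and representation are illustrative):

```python
def semantically_accurate(semantics, api_call):
    """Check that every gold slot value from the dialogue-final api-call
    annotation is present in the parser's semantic representation, and
    that no slot carries a conflicting (incorrect) value."""
    # Every annotated slot value must appear in the parsed semantics.
    if any(semantics.get(slot) != value for slot, value in api_call.items()):
        return False
    # No other incorrect slot value may be present in the semantics.
    return all(api_call.get(slot) == value for slot, value in semantics.items())
```

For a self-correction like "with indian food no sorry with spanish food", the check only succeeds if the final (corrected) value reaches the representation.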
The results showed that the parser has 100% semantic accuracy on both bAbI and bAbI+.5 This result is not surprising, given that DS-TTR is a general model of incremental language processing, including phenomena such as self-corrections and restarts (see Hough (2015) for details of the model).

# of training dialogues:   1      2      3      4      5
babble on bAbI             67.12  73.36  72.63  73.32  74.08
memn2n on bAbI             2.77   59.15  70.94  71.68  72.6
babble on bAbI+            59.42  65.27  63.45  64.34  65.2
memn2n on bAbI+            0.22   56.75  68.65  71.84  73.2
Table 1: Mean per utterance accuracies (%) for memn2n & babble models across the bAbI & bAbI+ datasets (10 folds)

MEMN2N Given just 1 to 5 training instances from bAbI as in the previous experiment, the mean api-call prediction accuracy of the memn2n model is nearly 0 on both bAbI and bAbI+. This is not at all unexpected, since prediction of the api-calls is a generative process, unlike the prediction of system turns which can be done on a retrieval/look-up basis alone. For this, the model needs to observe the different word sequences that might determine each parameter (slot) value, and observe them with sufficient frequency and variation. This is unlike a semantic parser like DS-TTR, that produces semantic representations for the dialogues as a result of the structural, linguistic knowledge that it embodies. Nevertheless, we were also interested in the general semantic robustness of the memn2n model to the transformations in bAbI+, i.e. how well does the memn2n model interpret bAbI+ dialogues, when trained on the full bAbI dataset? Does it then learn to generalise to (process) the bAbI+ dialogues with sufficient semantic accuracy? Table 2 shows that we can fully replicate the results reported in Bordes et al. (2017): the memn2n model can predict the api-calls with 100% accuracy, when trained on the bAbI train-set and tested on the bAbI test-set.
But when this same model is tested on bAbI+, the accuracy drops to a very poor 28%, making any dialogue system built using this model unusable in the face of natural, spontaneous dialogue data. This is further discussed below. [Footnote 5: A helpful reviewer points out that the DS-TTR setup is a carefully tuned rule-based system, thus perhaps rendering these results trivial. But we note that the results here are not due to ad-hoc constructions of rules/lexicons, but due to the generality of the grammar model, and its attendant incremental, left-to-right properties; and that the same parser can be used in other domains. Furthermore, the ability to process self-corrections, restarts, etc. \u201ccomes for free\u201d, without the need to add or posit new machinery.]

testing configuration   accuracy
memn2n on bAbI          100
memn2n on bAbI+         28
Table 2: api-call prediction accuracies (%) for the memn2n model trained on the bAbI train-set

5 Discussion 5.1 babble The method described above has the following advantages over previous approaches to dialogue system development:
\u2013 incremental (and thus more natural) language understanding, dialogue management, and generation;
\u2013 an \u201cend-to-end\u201d method for task-based systems: no Dialogue Act annotations are required (i.e. reduced development time and effort);
\u2013 a complete dialogue system for a new task can be automatically induced, using only \u2018raw\u2019 data \u2013 i.e. successful dialogue transcripts;
\u2013 the MDP state and action spaces are automatically induced, rather than having to be designed by hand (as in prior work);
\u2013 wide-coverage, task-based dialogue systems can be built from much smaller amounts of data, as shown in section 4.
This final point bears further examination.
As an empirically adequate model of incremental language processing in dialogue, the DS-TTR grammar is required to capture interactional variants such as question-answer pairs, over- and under-answering, self- and other-corrections, clarification, split-utterances, and ellipsis more generally. As we showed in section 4, even if most of these structures are not present in the training example(s), the resulting trained system is able to handle them, thus resulting in a very significant generalisation around the original data. We also note that, since we were in this instance interested in a direct comparison with memn2ns over the bAbI & bAbI+ datasets, we didn\u2019t exploit the power of Reinforcement Learning and exploration as we described above, as we have done before with other systems (Kalatzis et al., 2016). Therefore the generalisation results we report above for babble follow entirely from the knowledge present within the grammar as a computational model of dialogue processing and contextual update, rather than this having been learned from data. Applying the full RL method described above would have meant that the system would actually discover many interactional and syntactic variations that are not present in bAbI, nor in bAbI+. 5.2 memn2n Even when trained on very few training instances, the memn2n model was able to predict system responses remarkably well. But results from Experiment 2 above showed that this was misleading: the memn2ns were making a drastic number of semantic mistakes when interpreting the dialogues, both in the bAbI and bAbI+ datasets. Even when trained on the full bAbI dataset, the model performed badly on bAbI+ in terms of semantic accuracy.
We diagnose these results as follows: Problem complexity: The first thing to notice is that in bAbI dialogue Task 1, the responses are highly predictable and stay constant regardless of the actual task details (slot values) up to the point of the final api-calls; and further, that the prediction of api-calls is a generative process, unlike the prediction of the system turns, which is retrieval-based. This, in our view, explains the very large difference in memn2n performance across the two prediction tasks. Model robustness to the bAbI+ transformations: The variations introduced in bAbI+ are repetitions of both content and non-content words, as well as additional incorrect slot values. The model was working in the same setup as babble, therefore none of those variations could be treated as unknown tokens for either system. Although in the case of memn2n, some of the mixed-in words never appeared in the training data, and bAbI+ utterances were augmented significantly with those words \u2013 so it was interesting to see how such untrained embeddings would affect the latent memory representations inside memn2n. The resulting performance suggests that there was no significant impact on memn2n from these variations as far as predicting system responses was concerned. But the incorrect slot values introduced in self-corrections affect the system\u2019s task completion performance significantly, only appearing at the point of api-call predictions. We note also that none of our experiments in this paper involved training memn2n on bAbI+ data. There is a very interesting question here: is the memn2n model in principle able to learn to process the bAbI+ structures if it is in fact trained on them? And how much bAbI+ data would it require to do so? These issues are addressed in detail in Shalyminov et al. (2017).
6 Conclusions Our main advances are in a) training end-to-end dialogue systems from small amounts of data, and b) incremental processing for wider coverage of more natural everyday dialogues (e.g. containing self-repairs). We compared our grammar-based approach to dialogue processing (DS-TTR) with a state-of-the-art, end-to-end response retrieval model (memn2ns) (Bordes et al., 2017), when training on small amounts of dialogue data. Our experiments show that our model can process 74% of the Facebook AI bAbI dataset even when trained on only 0.13% of the data (5 dialogues). It can in addition process 65% of bAbI+, a corpus we created by systematically adding incremental dialogue phenomena such as restarts and self-corrections to bAbI. We find, on the other hand, that the memn2n model is not robust to the structures we introduced in bAbI+, even when trained on the full bAbI dataset. Acknowledgements This research is supported by the EPSRC, under grant number EP/M01553X/1 (BABBLE project). [Footnote 6: https://sites.google.com/site/hwinteractionlab/babble]"
+ }
+ ],
+ "Ioannis Konstas": [
+ {
+ "url": "http://arxiv.org/abs/1704.08381v3",
+ "title": "Neural AMR: Sequence-to-Sequence Models for Parsing and Generation",
+ "abstract": "Sequence-to-sequence models have shown strong performance across a broad\nrange of applications. However, their application to parsing and generating\ntext usingAbstract Meaning Representation (AMR)has been limited, due to the\nrelatively limited amount of labeled data and the non-sequential nature of the\nAMR graphs. We present a novel training procedure that can lift this limitation\nusing millions of unlabeled sentences and careful preprocessing of the AMR\ngraphs. For AMR parsing, our model achieves competitive results of 62.1SMATCH,\nthe current best score reported without significant use of external semantic\nresources. For AMR generation, our model establishes a new state-of-the-art\nperformance of BLEU 33.8. We present extensive ablative and qualitative\nanalysis including strong evidence that sequence-based AMR models are robust\nagainst ordering variations of graph-to-sequence conversions.",
+ "authors": "Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, Luke Zettlemoyer",
+ "published": "2017-04-26",
+ "updated": "2017-08-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "main_content": "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text. As shown in Figure 1, AMR represents the meaning using a directed graph while abstracting away the surface forms in text. AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012), summarization (Liu et al., 2015), sentence compression (Takase et al., 2016), and event extraction (Huang et al., 2016). While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Obama elect.01 celebrate.01 vote.01 and * op1 op2 ARG0 poss ARG0 person name name op1 person ARG0-of Figure 1: An example sentence and its corresponding Abstract Meaning Representation (AMR). AMR encodes semantic dependencies between entities mentioned in the sentence, such as \u201cObama\u201d being the \u201carg0\u201d of the verb \u201celected\u201d. of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016). In this work, we present the \ufb01rst successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation. Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015). However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose signi\ufb01cant challenges. We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to signi\ufb01cantly reduce sparsity. Our approach is two-fold. 
First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator. More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator. This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data. Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in concise ways, as illustrated in Figure 2(d). This preprocessing procedure helps overcome the data sparsity while also substantially reducing the complexity of the AMR graphs. Under such a representation, we show that any depth-first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example. Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach. For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup. For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU. We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure. 2 Related Work Alignment-based Parsing Flanigan et al. (2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm. Zhou et al.
(2016) extend JAMR by performing the concept and relation identification tasks jointly with an incremental model. Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules. In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities. Grammar-based Parsing Wang et al. (2016) (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al. (2017), Brandt et al. (2016), Puzikov et al. (2016), and Goodman et al. (2016). Artzi et al. (2015) use a grammar induction approach with Combinatory Categorial Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al. (2016). Pust et al. (2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014), and employing several external semantic resources. Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities. Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017). Similar to our approach, Peng et al. (2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens). However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6). AMR Generation Flanigan et al. (2016b) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system. Pourdamghani et al.
(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder. Song et al. (2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order. Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per-example random traversal of the graph. Finally, all three systems intersect with a large language model trained on Gigaword. We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus. Data Augmentation Our paired training procedure is largely inspired by Sennrich et al. (2016). They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations. We instead pre-train on the external corpus first, and then fine-tune on the original dataset. 3 Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1). Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4). 3.1 Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a. The AMR is a rooted directed acyclic graph. It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR-specific concepts, for example elect.01, Obama, and person in Figure 1. One of these nodes is a distinguished root, for example, the node and in Figure 1.
Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1. The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W. We study the task of training an AMR parser, i.e., finding a set of parameters \theta^P for model f, that predicts an AMR graph \hat{a}, given a sentence s:

\hat{a} = \arg\max_a f(a \mid s; \theta^P)    (1)

We also consider the reverse task, training an AMR generator by finding a set of parameters \theta^G, for a model f that predicts a sentence \hat{s}, given an AMR graph a:

\hat{s} = \arg\max_s f(s \mid a; \theta^G)    (2)

In both cases, we use the same family of predictors f, sequence-to-sequence models that use global attention, but the models have independent parameters, \theta^P and \theta^G. 3.2 Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequence-to-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016).1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015). The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder. We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder. The decoder predicts an attention vector over the encoder hidden states using previous decoder states. The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.
The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state. The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from the input sequence into the output sequence. 3.3 Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens. We define a linearization order for an AMR graph as any sequence of its nodes and edges. A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details). Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization. 3.4 Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples. Neural sequence-to-sequence models suffer from sparsity with so few training pairs. To reduce the effect of sparsity, we use an external unannotated corpus of sentences Se, and a procedure which pairs the training of the parser and generator. Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs. Then it uses self-training [Footnote 1: We extended the Harvard NLP seq2seq framework from http://nlp.seas.harvard.edu/code.]

Algorithm 1: Paired Training Procedure
Input: Training set of sentences and AMR graphs (s, a) \u2208 D, an unannotated external corpus of sentences S_e, a number of self-training iterations N, and an initial sample size k.
Output: Model parameters for AMR parser \u03b8P and AMR generator \u03b8G.
1: \u03b8P \u2190 Train parser on D    \u25b7 Self-train AMR parser.
2: S_e^1 \u2190 sample k sentences from S_e
3: for i = 1 to N do
4:   A_e^i \u2190 Parse S_e^i using parameters \u03b8P    \u25b7 Pre-train AMR parser.
5:   \u03b8P \u2190 Train parser on (A_e^i, S_e^i)    \u25b7 Fine-tune AMR parser.
6:   \u03b8P \u2190 Train parser on D with initial parameters \u03b8P
7:   S_e^{i+1} \u2190 sample k \u00b7 10^i new sentences from S_e
8: end for
9: S_e^N \u2190 sample k \u00b7 10^N new sentences from S_e    \u25b7 Pre-train AMR generator.
10: A_e \u2190 Parse S_e^N using parameters \u03b8P
11: \u03b8G \u2190 Train generator on (A_e^N, S_e^N)    \u25b7 Fine-tune AMR generator.
12: \u03b8G \u2190 Train generator on D using initial parameters \u03b8G
13: return \u03b8P, \u03b8G

to improve the initial parser. Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus Se, (2) creating a new set of parameters by training on Se, and (3) fine-tuning those parameters on the original paired data. After each iteration, we increase the size of the sample from Se by an order of magnitude. After we have the best parser from self-training, we use it to label AMRs for Se and pre-train the generator. The final step of the procedure fine-tunes the generator on the original dataset D. 4 AMR Preprocessing We use a series of preprocessing steps, including AMR linearization, anonymization, and other modifications we make to sentence-graph pairs. Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) address sparsity from certain open class vocabulary entries, such as named entities (NEs) and quantities. Figure 2(d) contains example inputs and outputs with all of our preprocessing techniques. Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.
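Removing variable names and the instance-of slash from a penman-style AMR string can be sketched with a single regular expression. This is illustrative only; re-entrancy handling and sense removal are separate steps in the paper's pipeline:

```python
import re

def simplify(amr):
    """Drop the variable name and the '/' that follow every opening
    parenthesis, so '(h / hold-04 :ARG1 (m / meet-03))'
    becomes '(hold-04 :ARG1 (meet-03))'."""
    return re.sub(r"\(\s*\S+\s*/\s*", "(", amr)
```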
In case of re-entrant nodes we replace the variable mention with its co-referring concept. Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his. Following Pourdamghani et al. (2016) we also remove senses from all concepts for AMR generation only. Figure 2(a) contains an example output after this stage. 4.1 Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W. 83.4% of them occur fewer than 5 times in the dataset. In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization. First, we anonymize sub-graphs headed by one of AMR\u2019s over 140 fine-grained entity types that contain a :name role. This captures structures referring to entities such as person, country, miscellaneous entities marked with *-entity, and typed numerical values, *-quantity. We exclude date entities (see the next section). We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.2 For example, in Figure 2 the sub-graph headed by country gets replaced with country_0. On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al. (2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph. We record this mapping for use during testing of generation models.
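On the sentence side, the anonymization step can be sketched as follows. This is a simplification: indices here are counted per fine-grained type, whereas the paper's footnote notes that only three id groups were used in practice, and the alignment-based extraction is assumed to have already produced the (type, span) pairs:

```python
from collections import defaultdict

def anonymize(sentence, entities):
    """Replace each aligned entity span with a fine-grained 'type_i'
    token and keep the mapping for de-anonymization at generation time;
    'entities' is a list of (type, surface_span) pairs."""
    counts = defaultdict(int)
    mapping = {}
    for ent_type, span in entities:
        token = f"{ent_type}_{counts[ent_type]}"
        counts[ent_type] += 1
        mapping[token] = span
        sentence = sentence.replace(span, token, 1)
    return sentence, mapping

def deanonymize(sentence, mapping):
    """Restore surface forms for predicted anonymization tokens."""
    for token, span in mapping.items():
        sentence = sentence.replace(token, span)
    return sentence
```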
If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name. If the entity was never observed, we copy its name directly from the AMR graph. [Footnote 2: In practice we only used three groups of ids: a different one for NEs, dates and constants/numbers.]

Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number. [Footnote 3: We also use three date format markers that appear in the text as: YYYYMMDD, YYMMDD, and YYYY-MM-DD.] In AMR generation, we render the corresponding format when predicted. Figure 2(b) contains an example of all preprocessing up to this stage.

[Figure 2: Preprocessing methods applied to sentence (top row) AMR graph (left column) pairs. Sentence-graph pairs after (a) graph simplification, (b) named entity anonymization, (c) named entity clustering, and (d) insertion of scope markers.]

Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005): person, location, organization and misc. This reduces the sparsity associated with many rarely occurring entity types. Figure 2(c) contains an example with named entity clusters.

NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data. To produce fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training. If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR. After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.

4.2 Linearization

Linearization Order Our linearization order is defined by the order of nodes visited by depth-first search, including backward traversing steps. For example, in Figure 2, starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.[4] The order traverses children in the sequence they are presented in the AMR.
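The depth-first linearization with backward traversing steps described above can be sketched as follows; the nested-dictionary tree encoding is our own illustrative assumption, not the paper's data structure.

```python
def linearize(node):
    # Depth-first linearization of an AMR-like tree, including backward steps.
    # `node` is a dict with a 'type' (concept label) and an optional list of
    # (edge-label, child) pairs under 'children' (an assumed encoding for this
    # sketch). Each edge label is emitted once on the way down and once again
    # on the way back up, mirroring the traversal described above.
    sequence = [node['type']]
    for edge, child in node.get('children', []):
        sequence.append(edge)            # forward step along the edge
        sequence.extend(linearize(child))
        sequence.append(edge)            # backward step along the same edge
    return sequence
```

On the meet subtree of Figure 2 this reproduces the order given in the text: meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.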
We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above. [Footnote 4: Sense, instance-of and variable information has been removed at the point of linearization.]

Rendering Function Our rendering function marks scope and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node; (2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it. In case the node has only one child we omit the scope markers (denoted with left \u201c(\u201d and right \u201c)\u201d parentheses), thus significantly reducing the number of generated tokens. Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.

5 Experimental Setup

We conduct all experiments on the AMR corpus used in SemEval-2016 Task 8 (LDC2015E86), which contains 16,833/1,368/1,371 train/dev/test examples. For the paired training procedure of Algorithm 1, we use Gigaword as our external corpus and sample sentences that only contain words from the AMR corpus vocabulary W. We subsampled the original sentences to ensure there is no overlap with the AMR training or test sets.

Table 1: SMATCH scores for AMR Parsing. *Reported numbers are on the newswire portion of a previous release of the corpus (LDC2014T12).
Model                                | Dev P  Dev R  Dev F1 | Test P Test R Test F1
SBMT (Pust et al., 2015)             |               69.0   |               67.1
JAMR (Flanigan et al., 2016a)        |                      | 69.7   64.5   67.0
CAMR (Wang et al., 2016)             | 72.3   61.4   66.6   | 70.4   63.1   66.5
CCG* (Artzi et al., 2015)            | 67.2   65.1   66.1   | 66.8   65.7   66.3
JAMR (Flanigan et al., 2014)         |                      | 64.0   53.0   58.0
GIGA-20M                             | 62.2   66.0   64.4   | 59.7   64.7   62.1
GIGA-2M                              | 61.9   64.8   63.3   | 60.2   63.6   61.9
GIGA-200k                            | 59.7   62.9   61.3   | 57.8   60.9   59.3
AMR-ONLY                             | 54.9   60.0   57.4   | 53.1   58.1   55.5
SEQ2SEQ (Peng et al., 2017)          |                      | 55.0   50.0   52.0
CHAR-LSTM (Barzdins and Gosko, 2016) |                      |               43.0

Table 2
summarizes statistics about the original dataset and the extracted portions of Gigaword.

Table 2: LDC2015E86 AMR training set, GIGA-200k, GIGA-2M and GIGA-20M statistics; OOV@1 and OOV@5 are the out-of-vocabulary rates on the NL side with thresholds of 1 and 5, respectively. Vocabulary sizes are 13027 tokens for the AMR side, and 17319 tokens for the NL side.
Corpus    | Examples | OOV@1 | OOV@5
AMR       | 16833    | 44.7  | 74.9
GIGA-200k | 200k     | 17.5  | 35.3
GIGA-2M   | 2M       | 11.2  | 19.1
GIGA-20M  | 20M      | 8.0   | 12.7

We evaluate AMR parsing with SMATCH (Cai and Knight, 2013), and AMR generation using BLEU (Papineni et al., 2002). [Footnote 5: We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007).] We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 \u2013 line 1). We searched over the set {128, 256, 500, 1024} for the best combination of sizes and set both to 500. Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5. Across all models, when performance does not improve on the AMR dev set, we decay the learning rate by 0.8. For the initial parser trained on the AMR corpus (Algorithm 1 \u2013 line 1), we use a single-stack version of our model, set the initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set. All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes. We set the initial Gigaword sample size to k = 200,000 and executed a maximum of 3 iterations of self-training. For pretraining the parser and generator (Algorithm 1 \u2013 lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs. We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1. We select the best performing model on
the development set among all of these fine-tuning attempts. During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.

6 Results

Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences. Through every round of self-training, our parser improves. Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points. While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1. We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning. All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs). Our full models outperform the original version of JAMR (Flanigan et al., 2014), a graph-based model, but still lag behind other parser-dependent systems (CAMR[6]) and resource-heavy approaches (SBMT).

Table 3: BLEU results for AMR Generation. *Model has been trained on a previous release of the corpus (LDC2014T12).
Model                              | Dev  | Test
GIGA-20M                           | 33.1 | 33.8
GIGA-2M                            | 31.8 | 32.3
GIGA-200k                          | 27.2 | 27.4
AMR-ONLY                           | 21.7 | 22.0
PBMT* (Pourdamghani et al., 2016)  | 27.2 | 26.9
TSP (Song et al., 2016)            | 21.1 | 22.4
TREETOSTR (Flanigan et al., 2016b) | 23.0 | 23.0

Generation Results Table 3 summarizes our AMR generation results on the development and test set. We outperform all previous state-of-the-art systems by the first round of self-training and further improve with the next rounds.
Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86 by over 9 BLEU points.[7] Overall, our model incorporates less data than previous approaches, as all reported methods train language models on the whole Gigaword corpus. We leave scaling our models to all of Gigaword for future work.

[Footnote 6: Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.]
[Footnote 7: We also trained our generator on GIGA-2M and fine-tuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.]

Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in the vocabulary given the small size of the AMR corpus, as shown in Table 2. By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.

Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical. Figure 2 contains examples for each setting of the ablations we evaluate on. First we evaluate using linearized graphs without parentheses for indicating scope, Figure 2(c), then without named entity clusters, Figure 2(b), and additionally without any anonymization, Figure 2(a).

Table 4: BLEU scores for AMR generation ablations on preprocessing (DEV set).
Model                 | BLEU
FULL                  | 21.8
FULL -SCOPE           | 19.7
FULL -SCOPE -NE       | 19.5
FULL -SCOPE -NE -ANON | 18.7

Table 5: SMATCH scores for AMR parsing ablations on preprocessing (DEV set).
Model      | Prec | Rec  | F1
FULL       | 54.9 | 60.0 | 57.4
FULL -ANON | 22.7 | 54.2 | 32.0

Table 4 summarizes our evaluation on AMR generation.
Each component is required, and scope markers and anonymization contribute the most to overall performance. We suspect that without scope markers our seq2seq models are not as effective at capturing long-range semantic relationships between elements of the AMR graph. We also evaluated the contribution of anonymization to AMR parsing (Table 5). Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017).

7 Linearization Evaluation

In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders. Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016), that seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.

7.1 Linearization Orders

All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited. Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d).

Table 6: BLEU scores for AMR generation for different linearization orders (DEV set).
Linearization Order | BLEU
HUMAN               | 21.7
GLOBAL-RANDOM       | 20.8
RANDOM              | 20.3

Human The proposal traverses children in the order presented by human-authored AMR annotations, exactly as shown in Figure 2(d).

Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset. We traverse children based on the position in the global ordering of the edge leading to a child.

Random For each example in the dataset we traverse children following a different random order of edge types.

7.2 Results

We present AMR generation results for the three proposed linearization orders in Table 6. Random linearization order performs somewhat worse than traversing the graph according to the Human linearization order.
Surprisingly, a per-example random linearization order performs nearly identically to a global random order, arguing that seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.

Human-authored AMR leaks information The small difference between random and global-random linearizations argues that our models are largely agnostic to variation in linearization order. On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR. To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments. We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information. [Footnote 8: This is consistent with constraints encoded in the annotation tool used to collect AMR. For example, :ARG0 edges are always ordered before :ARG1 edges.]

Table 7: Error analysis for AMR generation on a sample of 50 examples from the development set.
Error Type    | %
Coverage      | 29
Disfluency    | 23
Anonymization | 14
Sparsity      | 13
Attachment    | 12
Other         | 10

Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence. The relative ordering of some pairs of AMR edges was particularly indicative of generation order. For example, the relative ordering of edges with types location and time was 17% more indicative of the generation order than the majority of generated locations before time.[9] To compare to previous work we still report results using human orderings.
However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained using either consistent or random-derived linearization orders. Arguably, our models are agnostic to this choice.

[Footnote 9: Consider the sentences \u201cShe went to school in New York two years ago\u201d and \u201cTwo years ago, she went to school in New York\u201d, where \u201ctwo years ago\u201d is the time-modifying constituent of the verb went and \u201cNew York\u201d is the location-modifying constituent of went.]

8 Qualitative Results

Figure 3 shows example outputs of our full system. The generated text for the first graph is nearly perfect, with only a small grammatical error due to anonymization. The second example is more challenging, with a deep right-branching structure and a coordination of the verbs stabilize and push in the subordinate clause headed by state. The model omits some information from the graph, namely the concepts terrorist and virus. In the third example there are greater parts of the graph that are missing, such as the whole sub-graph headed by expert. Also, the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing), thus making their generation harder.

Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set. We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above. Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long-range dependency information between concepts.
9 Conclusions

We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from the Gigaword corpus. Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers. We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-the-art performance for generation (BLEU 33.8). For future work, we would like to extend our work to different meaning representations such as Minimal Recursion Semantics (MRS; Copestake et al. (2005)). This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages (Bender, 2014).[10] Taking a step further, we would like to apply our models to Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel et al., 2016).

Acknowledgments

The research was supported in part by DARPA under the DEFT program through AFRL (FA8750-13-2-0019) and the CwC program through ARO (W911NF-15-1-0543), the ARO (W911NF-16-1-0121), the NSF (IIS-1252835, IIS-1562364, IIS-1524371), an Allen Distinguished Investigator Award, Samsung GRO, and gifts by Google and Facebook. The authors thank Rik Koncel-Kedziorski, the UW NLP group, and the anonymous reviewers for their thorough and helpful comments.
[Footnote 10: A list of actively maintained languages can be found here: http://moin.delph-in.net/GrammarCatalogue]

[Figure 3: Linearized AMR after preprocessing, reference sentence, and output of the generator, for three examples. We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities). The system and reference sentences are reproduced below; the linearized input graphs are omitted here.]

SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .
REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .
COMMENT: disfluency

SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .
REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.
COMMENT: coverage, disfluency, attachment

SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .
REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .
COMMENT: coverage, disfluency, attachment"
+ }
+ ],
+ "Emanuele Bastianelli": [
+ {
+ "url": "http://arxiv.org/abs/2011.13210v1",
+ "title": "Encoding Syntactic Constituency Paths for Frame-Semantic Parsing with Graph Convolutional Networks",
+ "abstract": "We study the problem of integrating syntactic information from constituency\ntrees into a neural model in Frame-semantic parsing sub-tasks, namely Target\nIdentification (TI), FrameIdentification (FI), and Semantic Role Labeling\n(SRL). We use a Graph Convolutional Network to learn specific representations\nof constituents, such that each constituent is profiled as the production\ngrammar rule it corresponds to. We leverage these representations to build\nsyntactic features for each word in a sentence, computed as the sum of all the\nconstituents on the path between a word and a task-specific node in the tree,\ne.g. the target predicate for SRL. Our approach improves state-of-the-art\nresults on the TI and SRL of ~1%and~3.5% points, respectively (+2.5% additional\npoints are gained with BERT as input), when tested on FrameNet 1.5, while\nyielding comparable results on the CoNLL05 dataset to other syntax-aware\nsystems.",
+ "authors": "Emanuele Bastianelli, Andrea Vanzo, Oliver Lemon",
+ "published": "2020-11-26",
+ "updated": "2020-11-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "main_content": "Introduction In this paper we focus on the problem of integrating syntactic features in a neural architecture for the Frame-Semantic parsing (Das et al., 2014) process. Frame-semantic parsing is the task of extracting full semantic frame structures from text, as de\ufb01ned by Frame Semantics theory (Fillmore, 1985). From a theoretical perspective, Frame-Semantic parsing can be decomposed into three sub-tasks: 1) Target Identi\ufb01cation (TI) \u2013 identifying target words acting as lexical units; 2) Frame Identi\ufb01cation (FI) \u2013 disambiguating each target into a possible frame; and 3) Semantic Role Labeling (SRL) \u2013 extracting all the possible frame elements for a given frame. Early neural approaches have focused in this regard on the integration of features extracted from dependency trees, both for the FI and SRL tasks (Hermann et al., 2014; Kshirsagar et al., 2015; Swayamdipta et al., 2017), with positive results. Amongst all, SRL is the task that has received more attention when investigating methods for injecting syntax into neural models, mostly due to the strict correlation between syntax and argument structures (Punyakanok et al., 2008). Several solutions have been proposed, setting new baselines over general Frame-semantic parsing and speci\ufb01c SRL corpora. These include the use of dependency path embeddings (Roth and Lapata, 2016), the application of Graph Convolutional Networks (GCNs) to learn representations of the dependency graphs (Marcheggiani and Titov, 2017), or restricting the set of candidate arguments using pruning algorithms (He et al., 2018b). 
Multi-task learning has been also applied, either directly supervising attention to learn dependency parsing (Strubell et al., 2018) for both TI and SRL, or to implicitly bias learned encoded representations when jointly training a simplified syntactic dependency parser (Swayamdipta et al., 2018), or a semantic dependency parser for both FI and SRL (Peng et al., 2018). Although effective, these approaches have focused on exploiting syntactic dependencies rather than constituency information, partly because dependencies are more suited to be encoded as features or learned through attention mechanisms. Semantic roles are technically provided over syntactic constituents, which directly cast argument boundaries over word sequences. This is demonstrated also by earlier work on SRL, which relied on constituency-derived features (Gildea and Jurafsky, 2002; Xue and Palmer, 2004; Punyakanok et al., 2008). It follows that using constituency information should be beneficial, especially because reconstructing argument boundaries through dependencies would require an unbounded number of hops among words, making the problem hard to model in neural architectures (Marcheggiani and Titov, 2019). Following this idea, two recent approaches have attempted to rely on such constituency information to improve SRL performance. Wang et al. (2019) use linearised representations of constituency trees in different learning settings, either by extracting salient features, by multi-task learning, or by combining both approaches in an autoencoding fashion. Marcheggiani and Titov (2019), instead, train a GCN with the SRL objective to learn constituent representations, which are then infused into words through the same GCN via the message-passing operation (Scarselli et al., 2009). In this paper, we foster the same idea of relying on constituency information for every sub-task of Frame-semantic parsing, namely TI, FI, and SRL.
We train a GCN to learn specific constituency representations, which are used in turn to compute syntactic paths between constituency nodes. Our approach is similar to that of Marcheggiani and Titov (2019), although it significantly differs in: i) the initialisation and topology of the underlying graph; ii) the lower number of required parameters; and iii) the way that syntactic information is infused in every word representation, i.e. computing node-to-node syntactic paths. We show that our approach improves the state-of-the-art over the main Frame-semantic parsing benchmark, i.e. the FrameNet corpus (Baker et al., 1998), on the single TI and SRL tasks, and on FI in a joint-learning setting. Moreover, we demonstrate the generality of the approach by testing the same network on the CoNLL 2005 dataset (Carreras and M\u00e0rquez, 2005).

2 Learning Constituent Representations with Graph Convolutional Networks

In this work, we take inspiration from the seminal work on SRL by Gildea and Jurafsky (2002) to design constituent-based features. In their work, Gildea and Jurafsky found discriminant features for a given constituent of a tree to be the path between the target predicate and the constituent itself. Based on the same concept, our idea is to use the path connecting each token with a task-specific reference node as a feature in a sequence labelling setting. For example, following Figure 1, the path \u2191VBD\u2191VP\u2193NP\u2193JJ between the predicate had and the word little may provide information to discriminate little as the beginning of a potential argument span in SRL.

[Figure 1: Constituency path between the target predicate had and the word little. At the bottom, the IOB tagging of the corresponding argument spans.]

In order to encode a full constituent path as a feature, node representations have to be learned to properly enrich token encodings inside the network.
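The up/down path between two leaves can be recovered from parent pointers via their lowest common ancestor. The sketch below is a minimal illustration under our own assumed tree encoding (a child-to-parent map plus a node-label map), not the paper's implementation.

```python
def path_to_root(node, parent):
    # Chain of nodes from `node` up to the root, inclusive.
    chain = [node]
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain

def constituency_path(a, b, parent, label):
    # Up-arrow/down-arrow constituency path between leaves a and b, in the
    # style of the VBD-VP-NP-JJ example above. `parent` maps node ids to
    # parent ids; `label` maps node ids to constituent types (both assumed).
    up = path_to_root(a, parent)
    down = path_to_root(b, parent)
    on_down = set(down)
    common = next(n for n in up if n in on_down)   # lowest common ancestor
    rising = ['\u2191' + label[n] for n in up[1:up.index(common) + 1]]
    falling = ['\u2193' + label[n] for n in reversed(down[1:down.index(common)])]
    return ''.join(rising + falling)
```

On a toy tree for the Figure 1 sentence, this recovers the path quoted in the text between had and little.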
To this end, we train a GCN (Kipf and Welling, 2017), which operates over a graph representing the constituency tree, learning node (constituent) representations through convolutional steps. According to the message-passing approach, the representation of a constituent is updated with all the representations of the neighbouring nodes at each step. Formally, at the (l+1)-th step, the network evaluates the representation

H^(l+1) = LN(\u03c3(A H^(l) W^(l) + b^(l))),

where LN is a layer normalisation, \u03c3 is the ReLU activation function, A \u2208 [0, 1]^(N\u00d7N) is the adjacency matrix of the graph corresponding to a tree, H^(l) \u2208 R^(N\u00d7D) is the matrix of activations of the l-th layer, while W^(l) \u2208 R^(D\u00d7E) and b^(l) \u2208 R^(D\u00d71) are a layer-specific trainable weight matrix and bias. N is the number of nodes in the tree, while D and E are respectively the input and output dimensionality of the l-th layer. H^(0) = X corresponds to the matrix of input representations. We initialise X with trainable embeddings associated with each constituent type, which are learned together with the other representations. The GCN is trained using signal from the task-specific objective function.

[Figure 2: Process of learning constituent encodings and evaluating path features p_2 from the node had over the tree in Figure 1. Detail of feature evaluation for the word little.]

Each father-children relation of a constituency tree represents the production rule of a constituency grammar. We propose to rely on this structural relationship to better profile node representations in our network. To this end, we arrange the graph representing the tree, encoded through A, only with edges directed from children to father, so that each constituent is specifically profiled according to the
Self-loops are also added to feed each constituent with its own representation. Given the constituent encodings c1, ..., cN composing the rows of the output matrix H(l), a path feature pi,j between the nodes i and j is evaluated as pi,j = X k\u2208Pi,j ck, (1) where Pi,j is set of indexes of nodes connecting the shortest path between i and j. Figure 2 summarises the whole process of learning constituent representations and evaluating path features pi,j. 3 Integrating Constituency Features in Neural Frame-Semantic Parsing The process of extracting frame-semantic structures from text typically involves three tasks. Target words evoking frames, i.e. possible lexical units, are identi\ufb01ed in a sentence during the Target Identi\ufb01cation (TI) task. Frame Identi\ufb01cation (FI) aims to classify each lexical unit l into a possible frame f. For each retrieved l and f, SRL is performed to identify and classify the set of related arguments. In this paper, we want to assess whether our approach of encoding constituency information is bene\ufb01cial for each of these sub-tasks. In order to pair textual information with the constituency path encodings out of our GCN, we evaluate two encoded sequences a = a1, ..., an and b = b1, ..., bn through the encoding block showed in Figure 3, which computes: a = LN(BiLSTM(e \u2295proot) + e), b = LN(BiLSTM(e \u2295pl) + e), (2) where LN is a normalization layer, BiLSTM is a bi-directional LSTM encoder (Schuster and Paliwal, 1997) and e = e1, ..., en is the sequence of Text+POS Embeddings GCN LayerNorm had Rachel little to say || BiLSTM + b2 b1 b3 b4 b5 proot e LayerNorm || BiLSTM + a2 a1 a3 a4 a5 pj b = a = Figure 3: The two encoding backbones. The textual embeddings are fed with the token sequence, while the GCN receives the corresponding syntactic tree as input. input textual embeddings of a sentence. Residual connections are applied here to e to allow the gradient to \ufb02ow when \ufb01ne-tuning them. 
The sequences proot = p1,root, ..., pn,root and pl = p1,l, ..., pn,l are sequences of token-wise constituency path features, evaluated as in Equation 1, where the reference node j has been set as the root of the tree and a predicate target word corresponding to the lexical unit l.1 To be consistent with (1), when using the pi,j notation, the positional index i of a word in the input text will correspond to its node index in the tree.2 Therefore the encoded sequences a and b contain the root-centred path features and the predicate-centered path features, respectively. The TI and FI tasks rely only on the \ufb01rst one, whereas SRL makes use of both the representations, as explained in the following. 3.1 Target Identi\ufb01cation The TI task is shaped as a sequence labelling task using the IOBC labelling scheme, where the label C is used to deal with discontinued spans. Given the input sequence x = x1, ..., xn, a Conditional Random Field (CRF) layer (Lafferty et al., 2001) is applied to obtain the sequence of possible target spans t = t1, ..., tn, with ti \u2208{B-Lu, I-Lu, C-Lu, O}. For example, the sentence \u201cI\\O tried\\B-LU setting\\B-LU things\\O down\\C-LU\u201d has two lexical units: tried and setting down. The CRF thus mod1We use the \ufb01rst word for multi-token targets. 2Since we do not consider the word level in the tree, we create a correspondence between POS-tags and words in term of indexes. \fels the conditional probability3 P(t|x) = 1 Z exp( n X i=1 Exi,ti + n X i=0 Tti,ti+1) where emission scores Exi,ti are computed from the syntactic-augmented encodings a1, ..., an, and transitions Tti,ti+1 are learned during training. The most likely label sequence is computed through Viterbi decoding at test time. Given a batch of D sentences, the network is trained by minimising the loss LTI = \u22121 D PD i=1 logP(ti|xi). 3.2 Frame Identi\ufb01cation Frame Identi\ufb01cation is treated as a multiclassi\ufb01cation problem over the targets, i.e. 
lexical units, identified during TI. We model the probability of a lexical unit l evoking a frame f as

P(f|l) \propto \exp\left( W^{(3)} \sigma\left( W^{(2)} \sigma\left( W^{(1)} t_l \right) \right) \right),

where \sigma is the Leaky-ReLU activation function (Maas et al., 2013), and t_l is an encoded representation of l defined as

t_l = \sum_{m \in T^l} a_m,    (3)

with T^l the set of word indexes of the target span for the lexical unit l, and a_m the encodings from the sequence in Equation 2. The FI is trained by minimising the average cross-entropy loss

L_{FI} = -\frac{1}{D} \sum_{i=1}^{D} \frac{1}{|T_i|} \sum_{l \in T_i} \log P(f|l),

where T_i is the set of gold targets for the i-th example. A mask obtained from the FrameNet ontology is applied to penalise frames that are not evoked by the identified lexical unit.

3.3 Semantic Role Labeling

Semantic Role Labeling can be operationally decomposed into Argument Identification (AI) and Argument Classification (AC). SRL has recently been tackled with two main approaches. On the one hand, AI is bypassed by computing all the possible sentence subsequences, and then classifying each subsequence against the set A_f \cup {\emptyset} of possible frame elements for a given frame f \in F, where \emptyset is the no-frame-element label (Yang and Mitchell, 2017). This approach may require limiting the maximum length of the computed subsequences, leaving out some wide-covering arguments. On the other hand, AI and AC have been tackled together as an IOB-based sequence labelling task, where the label space is defined as {{B, I} \times A_F} \cup {O} (Strubell et al., 2018).

3 From now on, we drop the conditioning on the learned parameters in the probability for the sake of readability.

[Figure 4: AI and AC sub-networks on the right and on the left, respectively.]
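The Viterbi decoding used at test time by the TI CRF (and later by the SRL CRFs) can be sketched as follows; emissions and transitions are plain score matrices, and the interface is our own simplification:

```python
def viterbi(emissions, transitions):
    """Most likely label sequence under linear-chain CRF scores.

    emissions: n x L matrix of per-token label scores.
    transitions: L x L matrix of label-to-label scores.
    """
    n, L = len(emissions), len(emissions[0])
    score = list(emissions[0])
    backpointers = []
    for t in range(1, n):
        new_score, ptr = [], []
        for y in range(L):
            best = max(range(L), key=lambda yp: score[yp] + transitions[yp][y])
            ptr.append(best)
            new_score.append(score[best] + transitions[best][y] + emissions[t][y])
        score = new_score
        backpointers.append(ptr)
    # Backtrack from the best final label.
    y = max(range(L), key=lambda yy: score[yy])
    path = [y]
    for ptr in reversed(backpointers):
        y = ptr[y]
        path.append(y)
    return path[::-1]
```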
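The FrameNet ontology mask mentioned above can be applied by driving the logits of frames not evoked by the lexical unit to -inf before normalising; a sketch under assumed names:

```python
import math

def masked_frame_probs(logits, allowed):
    """Softmax over frame logits with an ontology mask: frames the lexical unit
    cannot evoke (allowed[f] == False) receive probability 0.
    Assumes at least one frame is allowed."""
    masked = [z if ok else float("-inf") for z, ok in zip(logits, allowed)]
    m = max(masked)
    exps = [math.exp(z - m) if z != float("-inf") else 0.0 for z in masked]
    s = sum(exps)
    return [e / s for e in exps]
```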
When it comes to datasets like FrameNet, where |A_F| = 725, the number of parameters increases drastically, especially when bilinear matrix operations are applied to score tokens against predicates: the bilinear layer alone can reach up to ~95M parameters. To overcome this issue, we treat AI and AC as two separate tasks, making the number of parameters drop from ~95M to ~3M.4 We use a hierarchy of two CRFs, one for each task (see Fig. 4). The first extracts the argument spans from a sentence representation using a bilinear operation. The second, instead, operates over the resulting sequence of spans, assigning a frame element label to each of them. This second CRF helps in contextualising frame elements with each other.

Argument Identification (AI) For a frame f and related lexical unit l, the AI task is modelled as the conditional probability P(s|x, l, f), where s = s_1, ..., s_n is a sequence of labels in the IOB2 notation, i.e. s_i \in {B, I, O}, denoting the argument spans. Similarly to Section 3.1, we employ a CRF to model the conditional probability, with the difference that emission scores are computed through a bilinear operation between token and predicate representations, in order to capture the interactions between each token x_i and the current pair (l, f). Given l, f and the encoding b_i of the i-th token from Eq. 2, we compute per-frame IOB2 token emissions through the bilinear operation

pr_{(l,f)}^\top U \, pb_i,
pr_{(l,f)} = \tanh(V^{(1)} z_{(l,f)}),    (4)
pb_i = \tanh(V^{(2)} b_i),

with z_{(l,f)} = e_l \oplus t_l \oplus e_f, and V^{(1)}, V^{(2)} trainable weight matrices. Here, e_l and e_f are learned lexical unit and frame embeddings, while t_l is the target representation evaluated in Equation 3.

4 These numbers are indeed proportional to the dimensionality of the inputs; however, their ratio would be the same.
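A sketch of the bilinear emission computation in Equation 4, under the assumption (ours, for illustration) that U holds one d x d matrix per IOB2 tag, yielding one emission score per token per tag:

```python
def bilinear_emissions(pr, tokens, U):
    """Emission scores: score[i][tag] = pr^T U[tag] pb_i (cf. Eq. 4).

    pr: predicate vector (already tanh-projected), tokens: per-token vectors pb_i,
    U: one weight matrix per IOB2 tag -- an illustrative factorisation.
    """
    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) for row in M]

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    return [[dot(pr, matvec(M, tok)) for M in U] for tok in tokens]
```

The resulting n x 3 score matrix is what the AI CRF would consume as emissions.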
Argument Classification (AC) Given a sequence of argument spans s^f 5 for a frame and lexical unit pair (l, f) from the AI step, the AC is modelled as the conditional probability P(a|x, l, f, s^f), where a = a_1, ..., a_u is a sequence of frame element labels, with each a_w \in A_f being the frame element label associated with the w-th argument span from s^f. A single vector representation r_w for each span is computed as

r_w = \sum_{m \in S^f_w} b_m,

where S^f_w is the subset of word indexes of the w-th argument span, extracted from s^f. Each vector in the sequence r_1, ..., r_u is then concatenated with the predicate representation z_{(l,f)} defined in the AI step, and projected through

q_w = \tanh( Y (r_w \oplus z_{(l,f)}) ).

P(a|x, l, f, s^f) is evaluated with a CRF layer which computes emission probabilities over the inputs q_1, ..., q_u. The SRL is trained by optimising the loss

L_{SRL} = -\frac{1}{D} \sum_{i=1}^{D} \frac{1}{|F_i|} \sum_{(l,f,s) \in F_i} \left[ \log P(s|x_i, l, f) + \log P(a|x_i, l, f, s^f) \right],

where F_i = {\langle l_1, f_1, s_1 \rangle, ..., \langle l_q, f_q, s_q \rangle} is the set of gold frame annotations for the i-th example. At inference time, Viterbi decoding is applied for both the AI and AC tasks to evaluate the most probable tag and frame element label sequences. A theory-based mask extracted from the FrameNet ontology is applied to the emission matrix to penalise frame element labels that are not associated with the current frame.

5 We drop the l subscript for the sake of readability.

4 Experimental Evaluation

This section reports the evaluation procedure and the results obtained by the proposed model on the different tasks with respect to existing approaches.

4.1 Experimental Setup

Constituency path features in our architectures are validated first on FrameNet 1.5 (FN15) (Baker et al., 1998), and then on CoNLL-2005 (CN05) (Carreras and Màrquez, 2005) to verify the generality of the approach. For both datasets, we apply the original data split as in Das et al.
(2014) for FN15, and as in Carreras and M` arquez (2005) on CN05. Results are obtained through the of\ufb01cial evaluation of the SemEval07 shared task for framesemantic parsing (Baker et al., 2007) (FN15) and the CoNLL-2005 shared task for Semantic Role Labeling (CN05).6 The model is implemented using PyTorch (Paszke et al., 2017) and AllenNLP (Gardner et al., 2017), and trained on RTX 2080Ti GPUs. The best hyperparameters are obtained tuning the model on the development set, via grid search using early stopping. We concatenate POS tag embeddings to input text embeddings, obtained using spaCy (Honnibal and Montani, 2017) for FN15, or directly from gold annotations for CN05. Constituent trees are obtained through the Berkeley Neural Parser (Kitaev and Klein, 2018), using the English pretrained model on FN15; for CN05 we generate the syntactic trees of the development and test sets using the model obtained on the CN05 training set. For the GCN, only l = 2 layers have been used, to make each constituent take representations only from its children and grandchildren. We use the Adam optimizer (Kingma and Ba, 2014) with weight decay \ufb01x (BERTAdam), with an initial learning rate of 2 \u00d7 10\u22125, which is decreased once no improvements are observed for 5 epochs. We regularise the model using dropout on encoders and feed-forward layers, while L2 regularisation is used on the CRFs\u2019 transition matrices and bilinear weight matrix.7 We test different con\ufb01gurations of textual embeddings (OURS-*), disabling or enabling the GCN features (*-GCN). For example, in OURS-ELMo we use ELMo (Peters et al., 2018) as input layer, without syntax features. We also include a joint learning setting (OURS-BERT-GCN-JL), where the three tasks are jointly optimised by summing the losses (L = LTI + LFI + LSRL) and BERT (Devlin et al., 2019) is used as input layer. 6https://www.cs.upc.edu/\u02dcsrlconll/soft. 
html 7All the code, experimental setup, data and parameter con\ufb01gurations for all the experiments available at www.Anon. \fModel P R F1 Das et al. (2014) 37.5 57.5 45.4 Swayamdipta et al. (2017) \u2212 \u2212 73.2 OURS-ELMo 69.84 78.88 74.08 OURS-ELMo-GCN 70.08 78.72 74.15 OURS-BERT 71.71 78.51 74.96 OURS-BERT-GCN 70.66 84.12 76.80 OURS-BERT-GCN-JL 72.59 79.43 75.86 Table 1: Precision, Recall and F1 of the Target Identi\ufb01cation task. Model Acc Das et al. (2014) 83.6 Hermann et al. (2014) 88.4 Hartmann et al. (2017) 87.6 Yang and Mitchell (2017) 88.2 Peng et al. (2018) 89.9 OURS-ELMo 88.89 OURS-ELMo-GCN 88.82 OURS-BERT 89.90 OURS-BERT-GCN 89.83 OURS-BERT-GCN-JL 90.10 Table 2: Frame Identi\ufb01cation results using gold targets, in terms of Accuracy. 4.2 FrameNet 1.5 Results In this section, we report the evaluation on FN15 for each tasks presented in Section 3. Target Identi\ufb01cation Table 1 shows the results of TI. In addition to neural state-of-the-art systems, we also report the rule-based baseline computed on the FN15 test set followingDas et al. (2014). The proposed architecture outperforms existing approaches in the literature, improving the stateof-the-art model in Swayamdipta et al. (2017) by ~1% of F1 on comparable inputs, and an additional 2.65% points are gained when using BERT. The syntax encoding contribution seems to be evident across the different input layers, with a more consistent improvement (1.84%) on BERT. Joint learning here acts as a regulariser, improving the Precision with respect to the single task setting. The model exposes good generalisation capabilities, correctly identifying 46.47% of the 402 out-of-vocabulary targets contained in the test set. However, the \ufb01nal performance of TI is not easy to assess, mostly due to the well-known issue of the FN1.5 test set of missing a considerable amount of unannotated targets (Yang and Mitchell, 2017). 
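The learning-rate schedule from the experimental setup (initial 2e-5, decreased once no dev-set improvement is observed for 5 epochs) can be sketched as below; the halving factor is our assumption, since the decay amount is not stated:

```python
class PlateauDecay:
    """Decay the learning rate once no dev-score improvement is seen for
    `patience` epochs (a sketch; the factor of 0.5 is an assumption)."""

    def __init__(self, lr=2e-5, patience=5, factor=0.5):
        self.lr, self.patience, self.factor = lr, patience, factor
        self.best, self.bad = float("-inf"), 0

    def step(self, dev_score):
        """Call once per epoch with the dev metric; returns the current lr."""
        if dev_score > self.best:
            self.best, self.bad = dev_score, 0
        else:
            self.bad += 1
            if self.bad >= self.patience:
                self.lr *= self.factor
                self.bad = 0
        return self.lr
```

In PyTorch the equivalent built-in is `torch.optim.lr_scheduler.ReduceLROnPlateau`.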
Frame Identi\ufb01cation In Table 2 we report the results of FI using gold targets. All the con\ufb01guModel P R F1 Das et al. (2014)\u2020 65.6 53.8 59.1 Kshirsagar et al. (2015)\u2020 66.0 60.4 63.1 Yang and Mitchell (2017)\u2020 70.2 60.2 65.5 Swayamdipta et al. (2018) 69.2 69.0 69.1 Marcheggiani and Titov (2019) 69.8 68.8 69.3 OURS-ELMo 63.89 67.36 65.58 OURS-ELMo-GCN 72.02 73.70 72.85 OURS-BERT 71.19 74.26 72.69 OURS-BERT-GCN 74.23 76.94 75.56 OURS-BERT-GCN-JL 74.56 74.43 74.50 Table 3: Semantic Role Labeling results with gold targets and frames. rations are comparable with results in literature, while OURS-BERT-GCN-JL outperforms the stateof-the-art model from Peng et al. (2018) by 0.2, suggesting that jointly learning the whole framesemantic parsing process provides shared representations that result bene\ufb01cial for the FI. However, this result is in line with Peng et al. (2018), where multi-task learning is used to exploit the contribution of a purely syntactic task. Overall, our features do not seem to provide a direct contribution for FI when trained in a single-task fashion. Semantic Role Labeling SRL results are shown in Table 3. The proposed model obtains state-ofthe-art results, outperforming all the existing approaches on SRL with gold targets and frames, with an improvement of 3.55% over Marcheggiani and Titov (2019) in comparable settings, and gaining additional 2.71 points when applying BERT. SRL is the task where syntax encodings provide the largest contribution, especially when paired with the ELMo contextual embeddings. In this setting, GCNs allow to gain up to 7.27% of F1. This behaviour is mainly due to the nature of the SRL task. In fact, as already reported in Section 1, argument boundaries are de\ufb01ned by the syntactic constituents spanning sub-trees. Such relationships seem to be well captured by our constituency path features. However, when using BERT as input layer, such a large improvement is not observed (i.e. 
\u201conly\u201d 2.87%). This is due to the deeper contextual representations provided by BERT, whose Transformer-based architecture already captures meaningful representations of the sentence's syntactic structure. In fact, BERT's ability to specialise some of its attention heads to recover short-range syntactic dependencies has been observed (Htut et al., 2019; Jawahar et al., 2019; Hewitt and Manning, 2019); here we show that pairing this information with explicit constituent encodings produces richer representations. Jointly learning the tasks does not result in better shared representations for SRL, although the higher Precision suggests a regularisation benefit of such a learning pattern.

Table 4: Ablation on the use of constituency features with different input layers for the SRL on FN15.
SRL Input | NO-GCN | GCN | gain
GloVe | 49.13 | 68.23 | +19.1
ELMo | 65.58 | 72.85 | +7.27
BERT | 72.69 | 75.56 | +2.87

4.3 Impact of syntax: ablation study

We finally run an ablation study on FN15 in order to show the contribution of constituency path features under different input textual embeddings. For this analysis, we select SRL, since it is the task benefitting the most from the syntactic features. Table 4 reports the results of this ablation. It is interesting to see how the contribution of the constituency feature encodings decreases as richer and more complex word embeddings are introduced. In fact, contextual embeddings like ELMo and BERT already encode some sort of dependency relations. However, the results show that our constituency features complement such syntactic information. The contribution of syntax is especially evident when using non-contextual GloVe embeddings, bringing the model within 1 point of the previous state of the art (Marcheggiani and Titov, 2019), which uses contextual embeddings, with an absolute improvement of 19.1%.
4.4 CoNLL-2005 Results In the following, we report the experiment on CN05, performed to assess the ability of the proposed model to scale to similar problems. We use the hyperparameters obtained on FN15, and tune the best epoch on the CN05 development set. Target Identi\ufb01cation/Predicate Detection Table 5 shows the results of Target Identi\ufb01cation (or Predicate Detection). For fairness, we report results of our model without contextual embeddings, since other approaches in the literature rely on these input layers. The proposed model obtains state-ofthe-art results for in-domain examples, whereas it seems to suffer from over\ufb01tting when applied to out-of-domain data. Model P R F1 WSJ Test He et al. (2017) 94.5 98.5 96.4 Strubell et al. (2018) 98.9 97.9 98.4 OURS-GloVe 98.39 99.01 98.70 OURS-GloVe-GCN 98.64 98.88 98.76 Brown Test He et al. (2017) 89.3 95.7 92.4 Strubell et al. (2018) 95.5 91.9 93.7 OURS-GloVe 90.15 95.65 92.82 OURS-GloVe-GCN 90.55 92.91 91.71 Table 5: Predicate detection results on the CN05 dataset. Semantic Role Labeling We report the results of our approach for SRL on CoNLL-2005. For such analysis, we use BERT as input layer, bestperforming on FN15. Table 6 shows the results of such experiments. Although our model does not overcome state-of-the-art results, it shows its competitiveness. Both on the in-domain (WSJ) and out-domain (Brown) test sets, syntactic encodings provide an important contribution, consistently improving the performance of the network. The smaller gap between the syntactic and nonsyntactic one with respect to results over FN15 may be partially due to the different (and higher) complexity of the CN05 constituency trees. In fact, when looking at the constituent vocabulary, we observe that the small amount of available constituents in FN15 (74) is not even comparable to the vocabulary size of CN05 constituents (1170). 
5 Related Work Graph Neural Networks to encode syntax The structural information conveyed by syntactic trees has been encoded in neural models through Graph Neural Networks (GNNs) (Scarselli et al., 2009) in several text processing applications. The Syntactic GCN framework has been applied to incorporate syntactic (Bastings et al., 2017) and semantic (Marcheggiani et al., 2018) structures by applying GCN to Neural Machine Translation. On the same task, Beck et al. (2018) build a graph-tosequence encoder using a Gated GNN (Li et al., 2016), which accounts for the whole dependency structures by adding dependency labels as additional nodes to the graph. Syntax encoding has been investigated also in the context of the Event Extraction task, that is the extraction of speci\ufb01ed \fModel P R F1 WSJ Test He et al. (2018a) 84.2 83.7 83.9 Tan et al. (2018) 84.5 85.2 84.8 Li et al. (2019) 87.9 87.5 87.7 Strubell et al. (2018)\u2020 86.02 86.05 86.04 Ouchi et al. (2018) 88.2 87.0 87.6 Wang et al. (2019) \u2212 \u2212 88.2 Marcheggiani and Titov (2019) 87.7 88.1 87.9 OURS-BERT 87.01 87.36 87.18 OURS-BERT-GCN 87.46 87.87 87.66 Brown Test He et al. (2018a) 74.2 73.1 73.7 Tan et al. (2018) 73.5 74.6 74.1 Li et al. (2019) 80.6 80.4 80.5 Strubell et al. (2018)\u2020 76.7 76.4 76.5 Ouchi et al. (2018) 76.0 70.4 73.1 Wang et al. (2019) \u2212 \u2212 79.3 Marcheggiani and Titov (2019) 80.5 80.7 80.6 OURS-BERT 78.76 79.56 79.16 OURS-BERT-GCN 80.44 81.21 80.82 Table 6: SRL results with gold predicates on the CN05 dataset. \u2020 models do not use contextual embeddings. classes of events from texts. Nguyen and Grishman (2018) apply the Syntactic GCN framework to inject dependency information to characterise event detection. Liu et al. (2018), instead, leverage syntactic dependency shortcuts to build local sub-graph representations of each node, encoded through an Attention-based GCN. 
Graph convolutions have also been applied to learn syntactic and semantic embeddings (Vashishth et al., 2019) in an unsupervised fashion, which are shown to provide advantages when integrated in other tasks. Finally, Li et al. (2018) propose a Segment Graph Convolutional and Recurrent NN, which operates over word embedding and syntactic dependencies, to classify relations from clinical notes. Syntax-aware neural models for Framesemantic parsing Identifying solutions to model syntax in neural models to improve Frame-semantic parsing related tasks has been a main concern in the community in the last \ufb01ve years. These have ranged from manual feature engineering (Hermann et al., 2014; Kshirsagar et al., 2015; Swayamdipta et al., 2017), to learning dependency path embeddings (Roth and Lapata, 2016), to encoding syntactic dependency structures via GCNs (Marcheggiani and Titov, 2017). Learning syntactic (Strubell et al., 2018) or semantic (Peng et al., 2018) dependency parsing as auxiliary tasks in a multi-task learning settings has found successful application, especially when the sub-task outputs were directly injected as features. Only two approaches have tackled the problem of integrating constituency information. Wang et al. (2019) propose to linearise the constituency tree using the model from G\u00b4 omez-Rodr\u00b4 \u0131guez and Vilares (2018), and apply a combination of learning techniques to improve sequence labelling performance. Marcheggiani and Titov (2019), instead, apply a GCN over the constituency tree to learn constituent boundaries to be injected in word sequence representations. In this work, we also apply a GCN, although our primary aim is to directly learn constituent representations. Whereas Marcheggiani and Titov start from an empty tree updated only with word embedding features, we initialise it with constituent speci\ufb01c embeddings. 
Moreover, we rely on a directed graph to pro\ufb01le constituent representations as the grammar rules they correspond to. This con\ufb01guration allows us to learn a more compact model, with a single matrix of parameters for each layer, against the six weight matrices required by their approach.8 Finally, we inject syntactic representations into words by explicitly computing constituency path features. 6 Conclusions In this work, we investigated the integration of structural information from a constituent tree in a neural model for Frame-semantic parsing. Constituent representations are learned through a GCN, and used to build constituency path features to be added to every word representation in a sequence. We tested our approach on all the Frame-semantic parsing sub-tasks, namely Target Identi\ufb01cation, Frame Identi\ufb01cation, and Semantic Role Labeling, showing that such features contribute mainly on the TI and the SRL tasks. Constituency path features can be applied to other sequence labelling based tasks, e.g. NamedEntity Recognition. Moreover, other modi\ufb01cations of GCNs have to be tested in this same framework, e.g. to assess whether Attention-based GCN may learn more re\ufb01ned constituent representations. Finally, these representations may be used in a nodeclassi\ufb01cation approach, inspired by seminal works (Gildea and Jurafsky, 2002), in an attempt to move away from the well-used sequence labelling model of recent years. 8Please refer to Marcheggiani and Titov (2019) for details."
+ },
+ {
+ "url": "http://arxiv.org/abs/2011.13205v1",
+ "title": "SLURP: A Spoken Language Understanding Resource Package",
+ "abstract": "Spoken Language Understanding infers semantic meaning directly from audio\ndata, and thus promises to reduce error propagation and misunderstandings in\nend-user applications. However, publicly available SLU resources are limited.\nIn this paper, we release SLURP, a new SLU package containing the following:\n(1) A new challenging dataset in English spanning 18 domains, which is\nsubstantially bigger and linguistically more diverse than existing datasets;\n(2) Competitive baselines based on state-of-the-art NLU and ASR systems; (3) A\nnew transparent metric for entity labelling which enables a detailed error\nanalysis for identifying potential areas of improvement. SLURP is available at\nhttps://github.com/pswietojanski/slurp.",
+ "authors": "Emanuele Bastianelli, Andrea Vanzo, Pawel Swietojanski, Verena Rieser",
+ "published": "2020-11-26",
+ "updated": "2020-11-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "main_content": "Introduction Traditionally, Spoken Language Understanding (SLU) uses a pipeline transcribing audio into text using Automatic Speech Recognition (ASR), which is then mapped into a semantic structure via Natural Language Understanding (NLU). However, this modular approach is prone to error propagation from noisy ASR transcriptions, and ASR in turn is not able to disambiguate based on semantic information. End-to-end (E2E) approaches on the other hand, can bene\ufb01t from joint modelling. One of the main bottlenecks for building E2E-SLU systems, however, is the lack of large and diverse datasets of audio inputs paired with corresponding semantic structures. Publicly available datasets to date are limited in terms of lexical and semantic richness (Lugosch et al., 2019b), number of vocalizations (Coucke et al., 2018), domain coverage (Hemphill et al., 1990; Dahl et al., 1994) and semantic contexts (Godfrey et al., 1992; Jurafsky and Shriberg, 1997). In this paper, we present the \u2217Authors contributed equally. User: \u201cMake a calendar entry for brunch on Saturday morning with Aaronson.\u201d Scenario: Calendar Action: Create entry Entity tags and lexical \ufb01llers: [event name: brunch], [date: Saturday], [timeofday: morning], [person: Aaronson] Figure 1: Example annotation from SLURP dataset. Spoken Language Understanding Resource Package (SLURP), a publicly available multi-domain dataset for E2E-SLU, which is substantially bigger and more diverse than existing SLU datasets. SLURP is a collection of ~72k audio recordings of single turn user interactions with a home assistant, annotated with three levels of semantics: Scenario, Action and Entities, as in Fig. 
1, including over 18 different scenarios, with 46 de\ufb01ned actions and 55 different entity types as listed on https://github.com/pswietojanski/slurp.1 In order to further support SLU development, we propose SLU-F1, a new metric for entity prediction, which is speci\ufb01cally designed to assess error propagation in structured E2E-SLU tasks. This metric has 3 main advantages over the commonly used accuracy/F1 metric, aimed at supporting SLU developers: First, it computes a distribution rather than a single score. This distribution is (1) inspectable and interpretable by system developers, and (2) can be converted into a con\ufb01dence score which can be used in the system logic (akin to previously available ASR con\ufb01dence scores). Finally, the distribution re\ufb02ects errors introduced by ASR and their impact on NLU and thus (3) gives an indication of the scope of improvement that can be gained by E2E approaches. Using this metric, we evaluate 4 baseline systems that represent competitive 1Note that Action & Entities are also referred to as \u2018Intent\u2019. Entities consist of \u2018Tags\u2019 and \u2018Fillers\u2019, aka. \u2018Slots\u2019 and \u2019Values\u2019. arXiv:2011.13205v1 [cs.CL] 26 Nov 2020 \fpipeline approaches, i.e. 2 state-of-the-art NLU systems and 2 ASR engines. We conduct a detailed error analysis of cases where E2E could have made a difference, i.e. error propagation and semantic disambiguation. 2 Related Work The \ufb01rst corpora containing both audio and semantic annotation reach as far back as the Air Travel Information System (ATIS) corpus (Hemphill et al., 1990) and the Switchboard-DAMSL Labeling Project (Jurafsky and Shriberg, 1997). However, it was not until recently when the \ufb01rst E2E approaches to SLU were introduced (Serdyuk et al., 2018; Haghani et al., 2018). Since then, one of the main research questions is how to overcome data sparsity by e.g. 
using transfer learning (Schuster et al., 2019; Tomashenko et al., 2019), or pretraining (Lugosch et al., 2019b). Here, we present a new corpus, SLURP, which is considerably bigger than previously available corpora. In particular, we directly compare our dataset to the two biggest E2E-SLU datasets for the English language: The Snips benchmark (Coucke et al., 2018) and the Fluent Speech Command (FSC) corpus (Lugosch et al., 2019b). With respect to these resources, SLURP contains ~6 times more sentences than Snips, ~2.5 times more audio examples than FSC, while covering 9 times more domains and being on average 10 times lexically richer than both FSC and Snips, see Section 3.3. SLURP represents the \ufb01rst E2E-SLU corpus of this size for the English language. The only existing comparable project is represented by the CASTLU dataset (Zhu et al., 2019) for Chinese Mandarin. 3 SLURP data 3.1 Data Collection SLURP was collected for developing an in-home personal robot assistant (Miksik et al., 2020). First, we collected textual data by prompting Mechanical Turk (AMT) workers to formulate commands towards the robot, using 200 pre-de\ufb01ned prompts such as \u201cHow would you ask for the time/ set an alarm/ play your favourite music?\u201d etc. We carefully designed the prompts to avoid lexical priming and thus increase linguistic variability of the collected data. This data has been manually annotated at scenario, action and entity level, and released as a text-only NLU benchmark (Liu et al., 2019). The 0 20 40 60 80 100 Matched WER [%] 50 60 70 80 90 100 % of data included SLURP + EntityWER=0 SLURP 0 20 40 60 80 100 Matched WER [%] 50 60 70 80 90 100 Headset data Distant data Figure 2: Amounts of data in SLURP matching given WER levels. textual data also serves as gold standard transcriptions for the audio data. The audio data was collected in acoustic conditions matched to a typical home or of\ufb01ce environment. 
We asked 100+ participants to read out the collected prompts on a tablet and to provide demographic background information, see Table 1. Speech was captured at distance with a microphone array, but some users were also equipped with a close-talking headset microphone (though, distant and close-talk channels are not synchronised at the sample level). Most recording sessions lasted 1 hour and were split into 4 parts. In each part, the technician changed position of the microphone array in the collection place. Users were encouraged to vary their location in the room from utterance to utterance (seating, standing or walking), and for some utterances not to speak directly to the mic array in order to resemble realistic conditions. These parameters are not logged with the dataset, however, they do pose increased challenges for ASR (Marino and Hain, 2011). Female Male Native Non-Native Unk. 37.3% 32.2% 25.5% 44% 30.5% Table 1: Participants\u2019 demographic statistics. 3.2 Audio Data Processing For quality control of the audio data, we automatically veri\ufb01ed i) whether the participant uttered the right / complete SLU query as prompted and ii) if the \ufb01les were appropriately end-pointed. We used the transcriptions of two ASR systems (referred to as Multi-ASR and Google-ASR, see Sec 5.1). These systems were not estimated from SLURP acoustic data, thus remain unbiased and do not reinforce potential errors. First, we removed all data that failed to force-align to transcripts using Multi-ASR. Then for the remainder we derived \fthe SLU related con\ufb01dences based on the matched Word-Error Rate (WER) between textual prompts and the obtained ASR hypotheses (calculated for both utterance and entity \ufb01llers), as well as crossmic validation between close and distant microphones, see Figure 2 (Right). 
Note that the higher matched WER does not necessarily imply the \ufb01le lacks the expected content, as simply the \ufb01le could be more challenging to automatically recognise. At the same time, from SLU perspective, one does not necessarily need grammatically correct utterances, as long as the they carry the information necessary to understand and execute the query. Figure 2 (Left) shows that for nearly 60% of the data at least one ASR system achieved a perfect score (WER=0), and this increases to ~73% after including utterances with imperfect sentence error rates but correct entity \ufb01llers (EntityWER=0). After \ufb01ltering, SLURP comprises ~58 hours of acoustic material. See Table 2 for detailed statistics. In addition, we provide SLURP-synth following (Lugosch et al., 2019a), where we replace \ufb01ltered or missing recordings with synthetic vocalisations from Google\u2019s Text-to-Speech system2 using 34 different synthetic English voices. 3.3 Linguistic Analysis and Comparison In this Section, we compare SLURP with the most recent publicly available E2E-SLU datasets: The Fluent Speech Command (FSC) corpus (Lugosch et al., 2019b) and the Snips benchmark (Coucke et al., 2018), which are also set in the smart-home domain. Snips covers 10 domains. However, only 2 domains have been vocalised, resulting in ~6K audio \ufb01les. FSC, on the other hand, is considerably bigger than Snips in terms of audio recordings, including ~30k vocalisations. However, the provided semantics only cover a small subset of actions with no more than two \ufb01xed entity types as arguments. In the following, we compare these dataset along four dimensions in order to get a \ufb01rst estimate of SLURP\u2019s level of complexity. Audio analysis: Table 2 summarises the audio data for each dataset. Audio \ufb01les are differentiated in close and far range microphone. 
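The matched WER used for this filtering is the standard word-level edit distance between the textual prompt and the ASR hypothesis, normalised by the reference length; a minimal sketch:

```python
def wer(ref, hyp):
    """Word Error Rate: word-level edit distance (substitutions, insertions,
    deletions) normalised by reference length. Assumes a non-empty reference."""
    r, h = ref.split(), hyp.split()
    # Dynamic-programming edit-distance table.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)
```

An "EntityWER=0" check then applies the same computation restricted to entity filler spans.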
As shown, SLURP has ~1.8\u00d7 more speakers, more than double the audio \ufb01les than the biggest dataset FSC, however FSC has an higher audio-per-sentence ratio. Demographic statistics are reported in Table 1. 2https://cloud.google.com/ text-to-speech FSC Snips SLURP SLURP -synth Speakers 97 69 177 34 Audio \ufb01les 30,043 5,886 72,277 69,253 \u2013 Close range 30,043 2,943 34,603 \u2013 \u2013 Far range \u2013 2,943 37,674 \u2013 Audio/Sentence 121.14 2.02 4.21 3.87 Duration [hrs] 19 5.5 58 43.5 Avg. length [s] 2.3 3.4 2.9 2.3 Table 2: Audio \ufb01le statistics. Lexical analysis: Table 3 provides an overview of different measures of lexical richness and diversity, following (Novikova et al., 2017), using both lexicalised (LEX) and delexicalised (DELEX) versions of the datasets (delexicalisation is performed by replacing each entity span with the entity label). Note that delexicalisation has a more severe effect on FSC and Snips, which indicates that most of their lexical richness and diversity stems from entity names. On average SLURP has ~100\u00d7 more tokens, lemmas, bigrams and trigrams than FSC, and ~10\u00d7 more than Snips. In addition, we compute the following lexicographic measures using the Lexical Complexity Analyser (Lu, 2012). Lexical Sophistication (LS2) (Laufer, 1994) is de\ufb01ned as Ts/T, with Ts being the number of sophisticated types of (unique) words3 and T being the number of types of words in a dataset. The Corrected Verb Sophistication (CSV1) (Wolfe-Quintero et al., 1998) is evaluated as Tsvb/\u221a2Nvb, with Tsvb the number of types of sophisticated verbs and Nvb the total number of verbs in a dataset. The Mean Segmental Textto-Token Ratio (MSTTR) (Johnson, 1944) is the average Text-to-Token Ratio (TTR \u2013 T/N) over all the segments of 104 words, with N the number of words in a dataset. The MSTTR is used to capture the variation of classes of words. 
Again, SLURP shows higher levels of lexical sophistication and richness than the other datasets, especially in the delexicalised case. Note that the lexicalised version of Snips contains many names of artists and bands in the music scenario, which contributes to enlarging the set of sophisticated words Ts. The only measure where SLURP does not outperform the other datasets is average sentence length: SLURP contains, among others, shorter interactions such as short acknowledgements, elliptic questions and atomic commands, whereas Snips is mostly composed of commands of similar length, often including multi-word named entities.

[3] Sophisticated words are words not among the 2,000 most frequent words in the English language.
[4] The standard segment size for written text is 50, but since we consider short utterances here, we lowered this number to 10.

                                  FSC           Snips         SLURP           SLURP-synth
                                  LEX    DELEX  LEX    DELEX  LEX     DELEX   LEX     DELEX
Sentences                         248    190    2,912  1,437  17,181  15,433  19,711  16,707
Average sentence length           4.49   -      7.48   -      6.93    -       7.27    -
Distinct tokens                   96     89     2,182  271    6,467   3,774   5,974   3,553
Distinct tokens occurring once    31     36     1,825  120    3,007   1,778   2,799   1,676
Distinct lemmas                   102    92     2,193  250    5,501   3,080   5,119   2,920
Distinct bigrams                  218    182    4,004  1,355  32,303  21,724  28,988  20,308
Distinct bigrams occurring once   97     97     3,066  698    21,997  14,095  19,360  12,637
Distinct trigrams                 250    198    5,703  2,408  50,422  37,417  45,631  35,548
Distinct trigrams occurring once  131    119    4,499  1,543  40,184  28,393  34,856  25,553
Lexical Sophistication (LS2)      0.35   0.31   0.87   0.41   0.79    0.69    0.79    0.68
Corrected Verb Soph. (CVS1)       0.42   0.38   0.72   0.59   5.17    3.54    4.58    3.20
Mean segmental TTR (MSTTR)        0.71   0.82   0.78   0.86   0.92    0.96    0.93    0.96

Table 3: Analysis of lexical diversity and sophistication.

Figure 3: Syntactic complexity on the D-Level scale (levels L0-L7), where higher levels correspond to more complex, deeper syntactic structures.
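For illustration, the three lexicographic measures can be sketched as below. Tokenisation and the sophisticated-word list are simplifications; the default segment size of 10 follows the paper's choice for short utterances.

```python
import math

def ls2(word_types, sophisticated_types):
    # Lexical Sophistication: sophisticated types / all types
    return len(sophisticated_types) / len(word_types)

def cvs1(sophisticated_verb_types, total_verbs):
    # Corrected Verb Sophistication: T_svb / sqrt(2 * N_vb)
    return len(sophisticated_verb_types) / math.sqrt(2 * total_verbs)

def msttr(tokens, segment_size=10):
    # Mean Segmental Type-Token Ratio: average TTR over full segments
    segments = [tokens[i:i + segment_size]
                for i in range(0, len(tokens) - segment_size + 1, segment_size)]
    if not segments:  # utterance shorter than one segment: plain TTR
        return len(set(tokens)) / len(tokens)
    return sum(len(set(s)) / len(s) for s in segments) / len(segments)
```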
Syntactic analysis: Next, we use the D-Level Analyser (Lu, 2009) to evaluate the syntactic complexity of user utterances according to the revised D-Level scale (Covington et al., 2006), where higher levels correspond to more complex, deeper syntactic structures: Levels 0-1 include simple sentences, while higher levels present embedded structures, subordinating conjunctions, etc. Figure 3 shows the percentages on the D-Level scale for each dataset. Overall, all the datasets present a majority of Level 0 and 1 sentences. This can be explained by the nature of the application domain, i.e. a smart-home assistant. FSC contains mostly Level 0 sentences (~89%), with some (~9%) Level 4 ones. 89% of Snips sentences fall into Levels 0 and 1, against only 74% of SLURP. The remaining 11% of Snips are mostly Level 4 sentences, while SLURP appears more mixed, even with ~5% of Level 7 sentences.

Semantic analysis: Finally, we compare the datasets according to their semantic content. SLURP is annotated with three layers of semantics, namely scenarios, actions and entities, where each sentence is annotated with one scenario and one action (see Fig. 1), similar to the annotations used in (Budzianowski et al., 2018; Schuster et al., 2019). FSC and Snips contain actions and entities as well; although they do not explicitly annotate scenarios, these can be deduced from the dataset file structure.

                   FSC    Snips  SLURP   SLURP-synth
Scenarios          2      2      18      18
Actions            6      7      46      54
Entities           2      4      56      56
Tot. entities      334    2,870  16,792  14,623
Entity/Sentence    1.35   0.98   0.97    0.65
Unique entities    16     1,348  5,613   4,619

Table 4: Semantic analysis of the number of scenarios, actions and entity types, the total number of annotated entities, and the number of unique entities, i.e. entities whose lexical filler appears only once.
The results in Table 4 show that SLURP's semantic coverage is 9 times wider than the other datasets in terms of scenarios, and ~6.5 times wider in terms of actions, where a higher number of scenarios results in a higher number of actions. FSC has the highest entity/sentence ratio, though it only has 16 unique entities. Snips appears to be the dataset with the highest ratio of unique to total entities, ~50%, against ~33% for SLURP. Again, this is due to the frequent use of proper names.

4 SLURP Metrics

The standard metric for evaluating E2E-SLU is accuracy, which is defined as 'the accuracy of all slots for an utterance taken together, that is, if the predicted intent differs from the true intent in even one slot, the prediction is deemed incorrect' (Lugosch et al., 2019b). However, this notion of accuracy is problematic when it comes to evaluating entities, as it does not account for the interplay between semantic mislabelling and textual misalignment. Nor does it differentiate between the entity label and the lexical filler, as in Fig. 4, where the lexical filler is defined as a span over tokens in the original sentence[5].

Gold: [event name: brunch], [date: Saturday], [timeofday: morning], [person: Aaronson]
SLU:  [event name: brunch], [date: Saturday], [date: morning], [person: Aron's son]

Figure 4: Continued example from Figure 1: errors in SLU entity tagging.

Formally, given a sentence s, let E and Ê be the sets of gold and predicted entities, respectively. Each e_i = ⟨l_i, f_i⟩ ∈ E is a tuple where l_i ∈ L is the label drawn from the list of available entity labels L, while f_i = [t_m, ..., t_n] is the lexical filler, defined as a span of consecutive tokens of s such that 1 ≤ m ≤ n ≤ |s|. Similarly, predicted entities are of the form ê_k = ⟨l̂_k, f̂_k⟩ ∈ Ê.
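The strict utterance-level accuracy quoted above can be written in a few lines; the representation of each prediction as an (intent, slot-set) pair is our assumption for illustration.

```python
def exact_match_accuracy(gold, pred):
    # An utterance counts as correct only if the intent and every
    # slot match the gold annotation exactly
    correct = sum(int(g == p) for g, p in zip(gold, pred))
    return correct / len(gold)
```

Under this metric, a single wrong slot filler zeroes out an otherwise perfect parse, which is what motivates a relaxed, distance-aware alternative.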
In span-based metrics, two entities e1 and e2 are identical (e1 =:= e2) when both labels and lexical fillers are the same (l1 = l2 and f1 = f2). A match is thus found only when the gold and predicted entities are identical, i.e. e_i =:= ê_k. This evaluation method holds in NLU because entities are tagged over the same textual sequence. When evaluating E2E-SLU, where entities are identified out of a waveform, this strict coupling with the token sequence may no longer apply. Note that pipeline systems for SLU are affected as well, since they operate over ASR-transcribed sentences, which can consistently differ from the original gold transcription. To account for this mismatch, we propose SLU-F1, a new metric which does not overly penalise misalignments caused by ASR errors. In addition, it captures the quality of transcriptions and entity tagging errors at the same time in a single metric. As such, this metric allows a direct comparison of E2E and pipeline systems. In particular, SLU-F1 combines span-based F1 evaluation with a text-based distance measure dist, e.g. WER. The equality property =:= is relaxed by allowing gold and predicted entities (e_i and ê_k) to match when the corresponding labels are identical (l_i = l̂_k), even when the fillers are not. In this case we increment the True Positives (TPs) by 1. To account for the lexical distance/mismatch, we compute dist between the gold and predicted fillers (dist(f_i, f̂_k)), and increment the False Positives (FPs) and False Negatives (FNs) by this amount, as in Algorithm 1. In the case of a predicted entity label matching more than one gold entity, e.g. when two or more entities with the same label are present, we opt for a non-conservative approach, selecting the gold annotation minimising dist as the candidate.

[5] In traditional NLU systems, the filler is identified with pairs of start-end tokens or characters, or with token index spans.
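This relaxed counting scheme (Algorithm 1 in the paper) can be sketched in Python as below. The (label, filler) tuple representation and the pluggable dist function are our assumptions; the official SLURP evaluation script is the authoritative implementation.

```python
def dist_f1_counts(gold, pred, dist):
    # Accumulate TP/FP/FN for one sentence; gold and pred are lists of
    # (label, filler) tuples, dist maps two fillers to a distance >= 0
    gold, pred = list(gold), list(pred)
    tp = fp = fn = 0.0
    for p_label, p_filler in list(pred):
        candidates = [g for g in gold if g[0] == p_label]
        if candidates:
            # non-conservative match: pick the gold filler minimising dist
            best = min(candidates, key=lambda g: dist(g[1], p_filler))
            d = dist(best[1], p_filler)
            tp += 1
            fp += d  # lexical mismatch counted fractionally
            fn += d
            gold.remove(best)
        else:
            fp += 1  # predicted label absent from the gold entities
        pred.remove((p_label, p_filler))
    fn += len(gold)  # unmatched gold entities
    return tp, fp, fn

def f1(tp, fp, fn):
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

With an exact 0/1 dist and the Figure 4 example, this yields TP = 3, FP = 2, FN = 2: brunch and Saturday match exactly, morning tagged as date adds a false positive, the missed timeofday adds a false negative, and Aron's son is labelled correctly but its filler mismatch adds 1 to both the FPs and FNs.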
The assumption is that the pair of entities most likely refers to the same text span. We use two distance functions to capture different aspects of possible transcription mistakes: WER (Word-F1) and the normalised character-level Levenshtein distance (Char-F1). WER is a strict token-level metric, which outputs errors/null matches whenever a mismatch or misalignment of tokens is observed. The character-based Levenshtein distance, on the other hand, offers the opposite perspective: by computing character-based similarities, it is much less susceptible to small variations of the input strings, and thus better accounts for local transcription errors which do not affect NLU tagging. For example, Word-F1 will penalise small morphological differences, e.g. singular vs. plural as in pizza vs. pizzas, which are often seen in transcriptions. This over-penalises NLU outputs, e.g. the tagging of pizzas may be semantically correct. Char-F1, on the other hand, does not over-penalise NLU, but it may also give a positive score when two fillers have similar characters yet are semantically and phonetically unrelated. In other words, Word-F1 shows the influence of ASR on NLU, whereas Char-F1 gives an indication of NLU performance despite transcription noise. These dist-F1 metrics (dist = Word or Char) are similar to the fuzzy matching mechanism proposed in (Rastogi et al., 2020), but fundamentally differ in the adopted string-matching scheme: any dist-F1 considers string ordering when scoring string similarity, while the fuzzy mechanism is order-invariant. Consider the illustrative entity tagging example in Figure 4. Here, Aaronson has been wrongly transcribed into Aron's son, and morning has been wrongly tagged with date. A dist-F1 will score the predicted entities as follows: both [event name: brunch] and [date: Saturday] contribute +1 to the TPs, since both label and filler correspond to the gold information.
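The two filler distances can be sketched as follows (a simple illustration; the normalisation choices are our assumptions):

```python
def levenshtein(a, b):
    # Edit distance between two sequences, single-row DP
    d = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, y in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (x != y))
    return d[len(b)]

def word_dist(gold, pred):
    # WER-style distance: token-level edits / gold token count
    g = gold.split()
    return levenshtein(g, pred.split()) / max(len(g), 1)

def char_dist(gold, pred):
    # Normalised character-level Levenshtein distance
    return levenshtein(gold, pred) / max(len(gold), len(pred), 1)
```

On the pizza example above, word_dist('pizza', 'pizzas') is 1.0 (a full token substitution), while char_dist('pizza', 'pizzas') is only ~0.17, matching the intuition that the tagging may still be semantically correct.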
The wrong label associated with morning increases the FPs by 1, although it is correctly transcribed. It follows that the entity timeofday is not predicted, increasing the FNs by 1. Finally, [person: Aron's son] is correctly labelled, but its filler is partially wrong: it thus contributes +1 to the TPs, but the FPs and FNs are both incremented by dist(Aaronson, Aron's son).

Algorithm 1: dist-F1 for a sentence s
Input: E, Ê; TP, FP, FN ← 0; Ls ← set of gold entity labels in s; dist ← a text-based distance metric
Output: TP, FP, FN
for each ê ∈ Ê do
    if ê.label ∈ Ls then
        Pl ← {(e, ê) | e ∈ E, e.label = ê.label}
        if Pl.size > 0 then
            (e, ê) ← argmin over (e, ê) ∈ Pl of dist(e.filler, ê.filler)
            TP += 1
            FP += dist(e.filler, ê.filler)
            FN += dist(e.filler, ê.filler)
            E.remove(e); Ê.remove(ê)
        else
            FP += 1; Ê.remove(ê)
        end if
    else
        FP += 1; Ê.remove(ê)
    end if
end for
for each e ∈ E do
    FN += 1; E.remove(e)
end for

Finally, we combine Word-F1 and Char-F1 into a single number, SLU-F1, which evaluates the final performance over the sum of the confusion matrices obtained with Word-F1 and Char-F1[6].

5 Experiments

We now establish the performance of different baseline systems on the SLURP corpus. As demonstrated in Section 3.1, SLURP is linguistically more diverse than previous datasets, and therefore more challenging for SLU. We first provide an evaluation of two ASR baselines to show the complexity of the acoustic dimension. We then evaluate the semantic dimension by testing the corpus against state-of-the-art NLU systems. We finally combine ASR and NLU, implementing several SLU pipelines. Note that, so far, direct comparisons of E2E-SLU with pipeline approaches have mainly been limited to baselines developed on the same dataset, e.g.
a multistage neural model in which the two stages corresponding to ASR and NLU are trained independently, but using the same training data (Desot et al., 2019; Haghani et al., 2018). We follow a different approach which, as we argue, is closer to the real-life application scenario: we use competitive ASR systems and state-of-the-art NLU systems.

[6] The official script for analysis and evaluation will be released with SLURP at https://github.com/pswietojanski/slurp.

5.1 Acoustic evaluation

We analyse the acoustic complexity of SLURP by testing two different ASR systems: an in-domain ASR trained on SLURP data, and Multi-ASR, which leverages a large amount of out-of-domain data. Both are built with the Kaldi ASR toolkit (Povey et al., 2011). Multi-ASR is a large-scale system estimated from publicly available acoustic data pooled together, including, among others, LibriSpeech (Panayotov et al., 2015), Switchboard (Godfrey et al., 1992), Fisher (Cieri et al., 2004), CommonVoice (Ardila et al., 2019), AMI (Carletta, 2007) and ICSI (Janin et al., 2003), which is further augmented to increase environmental robustness following (Ko et al., 2017)[7]. In total, a time-delay neural network acoustic model (Peddinti et al., 2015) is trained on 24,000 hours of augmented audio material with a lattice-free maximum mutual information objective (Povey et al., 2016). For decoding, we use a tri-gram Language Model (LM) that is an interpolation of an in-domain LM estimated from 60k voice-command sentences[8] and a background LM estimated from Fisher transcripts.
As shown in the first block of Table 5, Multi-ASR offers competitive performance on this data when compared to the off-the-shelf Google-ASR[9]. SLURP-ASR shares the overall pipeline with Multi-ASR, except that the acoustic model is estimated from the 40 hours of SLURP training data (83 hours when pooled with SLURP-synth) and bootstrapped from forced alignments obtained with the Gaussian mixture model built for Multi-ASR. Results for this scenario are reported in the second block of Table 5, where adding synthetic data yields a 1.6% improvement. For comparison, estimating acoustic models from synthetic data alone (no augmentations) results in 98% WER on the Test partition. Finally, we perform supervised acoustic domain adaptation (Bell et al., 2020) of Multi-ASR with SLURP-Train using a method proposed in (Swietojanski et al., 2016), which achieves the best performance, by around 1% absolute on Test.

[7] System built while the third author was with Emotech LTD.
[8] This includes SLURP-Train and an additional 50k sentences that have been collected, but not annotated for NLU purposes.
[9] https://cloud.google.com/speech-to-text/, tested on 20/05/2020 using the command and search model. Note that these systems are not directly comparable, as Multi-ASR benefits from speaker adaptation and in-domain LM data.

                              Dev    Test
Google-ASR                    24.0   24.7
Multi-ASR                     16.7   17.3
SLURP-ASR (Train)             23.7   23.8
SLURP-ASR (Train + Synth)     22.4   22.2
Multi-ASR + Adapt w/ SLURP    16.3   16.2

Table 5: SLURP WER for different ASR systems.

In sum, the large out-of-domain Multi-ASR system performs better than the systems trained on in-domain SLURP data. The best results are achieved with a pre-training approach, i.e. Multi-ASR adapted to SLURP. This shows that, despite SLURP's absolute size, the acoustic data is still too scarce to fully account for its lexical richness and noise conditions. As such, SLURP is a challenging dataset for ASR as well as for SLU.
5.2 Semantic evaluation

System descriptions: We evaluate SLURP against two state-of-the-art NLU models: HerMiT (Vanzo et al., 2019) and SF-ID (E et al., 2019). Both systems achieved state-of-the-art results on the NLU Benchmark (Liu et al., 2019) and on ATIS/Snips, respectively. HerMiT's architecture is a hierarchy of self-attention mechanisms and Bidirectional Long Short-Term Memory (BiLSTM) encoders followed by Conditional Random Field (CRF) tagging layers. Its multi-layered structure resembles a top-down approach of Scenario, Action and Entity prediction, where each task benefits from the information encoded by the previous stages, e.g. Entity detection can benefit from sentence-level encodings. SF-ID's architecture is also based on attention, using a BiLSTM encoder and a CRF tagger. The model defines two subnets that communicate through a reinforce vector. In order to compare with HerMiT's top-down approach, we choose the opposite, Entity-first propagation direction for SF-ID, i.e. the entity detection task is executed first and its encodings are used to feed the Intent detection task. Note that while HerMiT uses a multi-layered annotation scheme (Scenario and Action), SF-ID can only handle a single layer of annotation. To this end, we generate another, combined semantic layer, Scen_Act, to feed SF-ID with a label composed of the concatenation of Scenario and Action.

Scenario and Action prediction: We split SLURP into train, development and test sets as in Table 6.

                        Train   Dev    Test
Sentences               11,514  2,033  2,974
Audio files             50,628  8,690  13,078
Tot. entities           11,367  2,022  2,823
Entity/Sentence         0.98    0.99   0.95
Total duration [hours]  40.2    6.9    10.3

Table 6: Data distribution of the train, dev and test sets.
                     Scenario  Action  Scen_Act
Gold/HerMiT          90.15     86.99   84.84
Gold/SF-ID           86.48     83.69   82.25
Multi/HerMiT         83.73     79.70   76.68
Multi/SF-ID          81.90     77.72   75.87
Google/HerMiT        81.68     76.58   73.41
Google/SF-ID         78.87     74.31   72.06
SLURP/HerMiT         82.31     78.07   74.62
Multi-SLURP/HerMiT   85.69     81.42   78.33

Table 7: System accuracy for Scenario and Action prediction.

We first evaluate accuracy for Scenario, Action and the combination of the two. Table 7 summarises the results, where the top two rows are upper bounds based on gold transcriptions. Note that even on the gold transcriptions, both NLU systems perform substantially below their state-of-the-art results on the NLU Benchmark (HerMiT = 87.55) and on Snips (SF-ID = 97.43), respectively. This further demonstrates the complexity of SLURP, which also makes it a challenging test bed for future research not only in SLU, but also in NLU. Moving on to ASR-transcribed data, the results in the middle of Table 7 show that the Multi-ASR system in combination with HerMiT achieves the top performance on all three tasks. Finally, the third block reports HerMiT with ASR trained on in-domain SLURP audio data (also see Table 5). The results show that our best performing system, HerMiT with Multi-ASR + Adapt w/ SLURP, is only ~5% below the gold standard despite a 16% WER. We hypothesise that this is due to robust Scenario and Action encodings, which we examine further in our error analysis in Section 6.

Entity prediction: We now analyse the results for entity prediction in more detail using our proposed metric, SLU-F1. The results in Table 8 confirm that HerMiT is the stronger NLU system on gold-transcribed data, and that it outperforms the other system combinations for SLU in combination with Multi-ASR. Again, these results suggest that the top-down information flow of HerMiT (i.e. first decoding Scenario, then Action and lastly Entity) is better suited to this complex dataset, as we further demonstrate in the following.
                     Word-F1  Char-F1  SLU-F1  F1
Gold/HerMiT          -        -        -       78.19
Gold/SF-ID           -        -        -       69.87
Multi/HerMiT         67.78    71.38    69.53   62.69
Multi/SF-ID          65.82    68.92    67.33   60.15
Google/HerMiT        64.01    68.12    66.00   58.00
Google/SF-ID         62.73    65.37    64.02   56.54
SLURP/HerMiT         65.48    68.56    66.99   59.79
Multi-SLURP/HerMiT   69.34    72.39    70.84   64.16

Table 8: System performance on entity prediction.

                 No Errors  ASR Errors  NLU Errors  ASR/NLU Errors
HerMiT           5,196      3,182       1,205       3,490
SF-ID            5,005      3,010       1,396       3,660

Figure 5: Error propagation. No Errors: the number of predicted entities that match the gold transcriptions perfectly. ASR Errors: the number of predictions where ASR outputs an unmatched candidate but the NLU system is nevertheless able to recover the correct entities from the transcriptions. NLU Errors: sentences where the transcriptions are correct, but the entities do not match. ASR/NLU Errors: sentences where both ASR and NLU errors are present.

6 Error Analysis

6.1 Analysis of Error Propagation for different NLU Approaches

We further describe the types of errors produced by HerMiT and SF-ID for entity prediction on noisy ASR data, as shown in Figure 5. Overall, HerMiT has lower error counts in all categories except ASR errors, where it is nevertheless able to recover the correct entities from the transcriptions. These results indicate that HerMiT, with its top-down decoding approach going from the more general Scenario to the more specific Action and Entity prediction, is more robust to noise propagation than the bottom-up SF-ID system.

6.2 Expressiveness of the SLU-F1 Metric

The results in Table 8 show that our proposed metrics Word-F1 and Char-F1 both produce the same ordering as F1.
However, a Pearson's correlation between Word-F1 and Char-F1 shows that the two metrics are only weakly correlated (ρ = 0.2, p ≪ 0.0001), which confirms that they indeed measure two different aspects despite producing the same final ordering. In addition to an overall performance score, the metrics give us a distribution of value ranges, which can give insight into system behaviour. Figure 6 shows the distributions of entity-level dist value ranges over the WER of the sentence for our top performing system, HerMiT/Multi-ASR. For entity-WER (Figure 6a), the distribution shows a high density of entities falling within sentence-WER in [0, 1] and entity-WER in [0, 1]. When analysing sentences with correct transcriptions, i.e. sentence-WER = 0, we find only NLU errors, due to span misalignments. When sentence-WER > 0, most of the entities are scored with a value either in (0, 0.5] or in (0.5, 1]. In the first case, we find NLU mistakes caused by shortened entity spans, e.g. 'football' instead of 'football match'. The second range includes span shortenings and extensions, e.g. 'Saturday morning' instead of 'Saturday', as well as many mis-transcribed entities, due either to morphological errors (singular vs. plural) or to transcription errors. The distribution for the entity-level normalised Levenshtein distance is less spiked, as shown in Fig. 6b. As for WER, all the entries with sentence-WER = 0 and entity-Lev > 0 correspond to correctly labelled entities whose span has been shortened or extended. Entities assigned character-based Lev values in (0, 0.2] mostly contain negligible ASR errors, such as morphological errors, compound merging or explosion, or general transcription mistakes, e.g. Sara vs. Sarah. Entities with Lev in (0.2, 0.5] comprise both ASR errors and minor NLU errors such as shortened or extended entity spans.
When entity-Lev is in (0.5, 0.8], we find mostly NLU errors due to wrong span tagging. Finally, two types of NLU errors fall in the range (0.8, 1.0]: either span errors with a substantial mismatch in length with the gold annotations, or more severe ASR errors.

Figure 6: Correlation between sentence-level WER (intervals of 0.5) and entity-level (a) WER values (intervals of 0.5), (b) normalised character-based Levenshtein values (intervals of 0.1).

7 Discussion

SLURP is not only bigger, but also a magnitude more challenging than previous datasets. The purpose of this new data release is not to provide yet another benchmark dataset, but to provide a use-case-inspired new challenge that is currently beyond the capabilities of SOTA E2E approaches (due to scalability, lack of data efficiency, etc.). We have tested several SOTA E2E-SLU systems on SLURP, including that of (Lugosch et al., 2019b), which produces SOTA results on the FSC corpus. However, re-training these models on this more complex domain did not converge or result in meaningful outputs. Note that these models were developed to solve much easier tasks (e.g. a single domain). Developing an appropriate model architecture is left for future work; for this reason, in this work we focus on benchmarking existing approaches.
We show that SOTA modular approaches are able to provide a strong baseline for this challenging data, a baseline which has yet to be matched by SOTA E2E systems. We also argue that our modular baseline is closer to how real-world applications build SLU systems, an aspect that is nevertheless often overlooked when testing E2E systems. As such, we consider our SOTA modular baseline a major novel contribution.

8 Conclusion

In this paper, we present SLURP, a new resource package for SLU. First, we present a novel dataset which is substantially bigger than other publicly available resources. We show that this dataset is also more challenging, first by conducting a linguistic analysis, and then by demonstrating the reduced performance of state-of-the-art ASR and NLU systems. Second, we propose the new SLU-F1 metric for evaluating entity prediction in SLU tasks. In a detailed error analysis, we demonstrate that the distribution of this metric can be inspected by system developers to identify error types and system weaknesses. Finally, we analyse the performance of two state-of-the-art NLU systems on ASR data. We find that a sequential decoding approach for SLU, which starts from the more abstract notions of scenario and action, produces better results for entity tagging than an approach which works bottom-up, i.e. starting from the entities. Our error analysis suggests that this is due to the former approach being better able to account for noise by priming entity tagging, which is a more challenging task than scenario or action recognition. In future work, we hope that SLURP will be a valuable resource for developing E2E-SLU systems, as well as more traditional pipeline approaches to SLU. The next step is to extend SLURP with spontaneous speech, which would again increase its complexity, but also move it one step closer to real-life applications.

Acknowledgements

Thanks to Emotech Ltd and H. Zhuang for agreeing to release this data for research purposes. Special thanks to P. Mediano, M. Zhou and X. Chen for help with designing and organising the data collection. This research received funding from the EPSRC project MaDrIgAL (EP/N017536/1), as well as a Google Research Grant to support NLU and dialog research at Heriot-Watt University."
+ }
+ ],
+ "Andrea Vanzo": [
+ {
+ "url": "http://arxiv.org/abs/1910.00912v1",
+ "title": "Hierarchical Multi-Task Natural Language Understanding for Cross-domain Conversational AI: HERMIT NLU",
+ "abstract": "We present a new neural architecture for wide-coverage Natural Language\nUnderstanding in Spoken Dialogue Systems. We develop a hierarchical multi-task\narchitecture, which delivers a multi-layer representation of sentence meaning\n(i.e., Dialogue Acts and Frame-like structures). The architecture is a\nhierarchy of self-attention mechanisms and BiLSTM encoders followed by CRF\ntagging layers. We describe a variety of experiments, showing that our approach\nobtains promising results on a dataset annotated with Dialogue Acts and Frame\nSemantics. Moreover, we demonstrate its applicability to a different, publicly\navailable NLU dataset annotated with domain-specific intents and corresponding\nsemantic roles, providing overall performance higher than state-of-the-art\ntools such as RASA, Dialogflow, LUIS, and Watson. For example, we show an\naverage 4.45% improvement in entity tagging F-score over Rasa, Dialogflow and\nLUIS.",
+ "authors": "Andrea Vanzo, Emanuele Bastianelli, Oliver Lemon",
+ "published": "2019-10-02",
+ "updated": "2019-10-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "main_content": "Introduction Research in Conversational AI (also known as Spoken Dialogue Systems) has applications ranging from home devices to robotics, and has a growing presence in industry. A key problem in real-world Dialogue Systems is Natural Language Understanding (NLU) \u2013 the process of extracting structured representations of meaning from user utterances. In fact, the effective extraction of semantics is an essential feature, being the entry point of any Natural Language interaction system. Apart from challenges given by the inherent complexity and ambiguity of human language, other challenges arise whenever the NLU has to operate over multiple domains. In fact, interaction patterns, domain, and language vary depending on the device the user is interacting with. For example, chit-chatting and instruction-giving for executing an action are different processes in terms of language, domain, syntax and interaction schemes involved. And what if the user combines two interaction domains: \u201cplay some music, but \ufb01rst what\u2019s the weather tomorrow\u201d? In this work, we present HERMIT, a HiERarchical MultI-Task Natural Language Understanding architecture1, designed for effective semantic parsing of domain-independent user utterances, extracting meaning representations in terms of high-level intents and frame-like semantic structures. With respect to previous approaches to NLU for SDS, HERMIT stands out for being a cross-domain, multi-task architecture, capable of recognising multiple intents/frames in an utterance. HERMIT also shows better performance with respect to current state-of-the-art commercial systems. Such a novel combination of requirements is discussed below. Cross-domain NLU A cross-domain dialogue agent must be able to handle heterogeneous types of conversation, such as chit-chatting, giving directions, entertaining, and triggering domain/task actions. 
A domain-independent and rich meaning representation is thus required to properly capture the intent of the user. Meaning is modelled here through three layers of knowledge: dialogue acts, frames, and frame arguments. Frames and arguments can in turn be mapped to domain-dependent intents and slots, or to Frame Semantics' (Fillmore, 1976) structures (i.e. semantic frames and frame elements, respectively), which allow handling of heterogeneous domains and language.

Multi-task NLU: Deriving such a multi-layered meaning representation can be approached through multi-task learning. Multi-task learning has found success in several NLP problems (Hashimoto et al., 2017; Strubell et al., 2018), especially with the recent rise of Deep Learning. Thanks to the possibility of building complex networks, handling more tasks at once has proven to be a successful solution, provided that some degree of dependence holds between the tasks. Moreover, multi-task learning allows the use of different datasets to train subparts of the network (Sanh et al., 2018). Following the same trend, HERMIT is a hierarchical multi-task neural architecture which is able to deal with the three tasks of tagging dialogue acts, frame-like structures, and their arguments in parallel. The network, based on self-attention mechanisms, seq2seq bi-directional Long Short-Term Memory (BiLSTM) encoders, and CRF tagging layers, is hierarchical in the sense that information output from earlier layers flows through the network, feeding the following layers to solve downstream dependent tasks.

[1] https://gitlab.com/hwu-ilab/hermit-nlu
arXiv:1910.00912v1 [cs.CL] 2 Oct 2019

Multi-dialogue act and -intent NLU: Another degree of complexity in NLU is represented by the granularity of knowledge that can be extracted from an utterance. Utterance semantics is often rich and expressive: approximating meaning to a single user intent is often not enough to convey the required information.
As opposed to the traditional single-dialogue act and single-intent view in previous work (Guo et al., 2014; Liu and Lane, 2016; Hakkani-Tur et al., 2016), HERMIT operates on a meaning representation that is multi-dialogue act and multi-intent. In fact, it is possible to model an utterance's meaning through multiple dialogue acts and intents at the same time. For example, the user would be able both to request tomorrow's weather and listen to his/her favourite music with just a single utterance. A further requirement for practical application is that the system should be competitive with the state-of-the-art: we evaluate HERMIT's effectiveness by running several empirical investigations. We perform a robust test on the publicly available NLU-Benchmark (NLU-BM) (Liu et al., 2019), containing 25K cross-domain utterances with a conversational agent. The results obtained show performance higher than well-known off-the-shelf tools (i.e., Rasa, Dialogflow, LUIS, and Watson). The contribution of the different network components is then highlighted through an ablation study. We also test HERMIT on the smaller Robotics-Oriented MUltitask Language UnderStanding (ROMULUS) corpus, annotated with Dialogue Acts and Frame Semantics. HERMIT produces promising results for application in a real scenario.

2 Related Work

Much research on Natural (or Spoken, depending on the input) Language Understanding has been carried out in the area of Spoken Dialogue Systems (Chen et al., 2017), where the advent of statistical learning has led to the application of many data-driven approaches (Lemon and Pietquin, 2012). In recent years, the rise of deep learning models has further improved the state-of-the-art. Recurrent Neural Networks (RNNs) have proven to be particularly successful, especially uni- and bi-directional LSTMs and Gated Recurrent Units (GRUs). The use of such deep architectures has also fostered the development of joint classification models of intents and slots.
Bidirectional GRUs are applied in (Zhang and Wang, 2016), where the hidden state of each time step is used for slot tagging in a seq2seq fashion, while the \ufb01nal state of the GRU is used for intent classi\ufb01cation. The application of attention mechanisms in a BiLSTM architecture is investigated in (Liu and Lane, 2016), while the work of (Chen et al., 2016) explores the use of memory networks (Sukhbaatar et al., 2015) to exploit encoding of historical user utterances to improve the slot-\ufb01lling task. Seq2seq with self-attention is applied in (Li et al., 2018), where the classi\ufb01ed intent is also used to guide a special gated unit that contributes to the slot classi\ufb01cation of each token. One of the \ufb01rst attempts to jointly detect domains in addition to intent-slot tagging is the work of (Guo et al., 2014). An utterance syntax is encoded through a Recursive NN, and it is used to predict the joined domain-intent classes. Syntactic features extracted from the same network are used in the per-word slot classi\ufb01er. The work of (Hakkani-Tur et al., 2016) applies the same idea of (Zhang and Wang, 2016), this time using a context-augmented BiLSTM, and performing domain-intent classi\ufb01cation as a single joint task. As in (Chen et al., 2016), the history of user utterances is also considered in (Bapna et al., 2017), in combination with a dialogue context encoder. A two-layer hierarchical structure made of a combination of BiLSTM and BiGRU is used \ffor joint classi\ufb01cation of domains and intents, together with slot tagging. (Rastogi et al., 2018) apply multi-task learning to the dialogue domain. Dialogue state tracking, dialogue act and intent classi\ufb01cation, and slot tagging are jointly learned. Dialogue states and user utterances are encoded to provide hidden representations, which jointly affect all the other tasks. 
Many previous systems are trained and compared over the ATIS (Airline Travel Information Systems) dataset (Price, 1990), which covers only the \ufb02ight-booking domain. Some of them also use bigger, not publicly available datasets, which appear to be similar to the NLU-BM in terms of number of intents and slots, but they cover no more than three or four domains. Our work stands out for its more challenging NLU setting, since we are dealing with a higher number of domains/scenarios (18), intents (64) and slots (54) in the NLU-BM dataset, and dialogue acts (11), frames (58) and frame elements (84) in the ROMULUS dataset. Moreover, we propose a multitask hierarchical architecture, where each layer is trained to solve one of the three tasks. Each of these is tackled with a seq2seq classi\ufb01cation using a CRF output layer, as in (Sanh et al., 2018). The NLU problem has been studied also on the Interactive Robotics front, mostly to support basic dialogue systems, with few dialogue states and tailored for speci\ufb01c tasks, such as semantic mapping (Kruijff et al., 2007), navigation (Kollar et al., 2010; Bothe et al., 2018), or grounded language learning (Chai et al., 2016). However, the designed approaches, either based on formal languages or data-driven, have never been shown to scale to real world scenarios. The work of (Hatori et al., 2018) makes a step forward in this direction. Their model still deals with the single \u2018pick and place\u2019 domain, covering no more than two intents, but it is trained on several thousands of examples, making it able to manage more unstructured language. An attempt to manage a higher number of intents, as well as more variable language, is represented by the work of (Bastianelli et al., 2016) where the sole Frame Semantics is applied to represent user intents, with no Dialogue Acts. 
3 Jointly parsing dialogue acts and frame-like structures The identi\ufb01cation of Dialogue Acts (henceforth DAs) is required to drive the dialogue manager to the next dialogue state. General frame structures (FRs) provide a reference framework to capture user intents, in terms of required or desired actions that a conversational agent has to perform. Depending on the level of abstraction required by an application, these can be interpreted as more domain-dependent paradigms like intent, or to shallower representations, such as semantic frames, as conceived in FrameNet (Baker et al., 1998). From this perspective, semantic frames represent a versatile abstraction that can be mapped over an agent\u2019s capabilities, allowing also the system to be easily extended with new functionalities without requiring the de\ufb01nition of new ad-hoc structures. Similarly, frame arguments (ARs) act as slots in a traditional intent-slots scheme, or to frame elements for semantic frames. In our work, the whole process of extracting a complete semantic interpretation as required by the system is tackled with a multi-task learning approach across DAs, FRs, and ARs. Each of these tasks is modelled as a seq2seq problem, where a task-speci\ufb01c label is assigned to each token of the sentence according to the IOB2 notation (Sang and Veenstra, 1999), with \u201cB-\u201d marking the Beginning of the chunk, \u201cI-\u201d the tokens Inside the chunk while \u201cO-\u201d is assigned to any token that does not belong to any chunk. Task labels are drawn from the set of classes de\ufb01ned for DAs, FRs, and ARs. Figure 1 shows an example of the tagging layers over the sentence Where can I \ufb01nd Starbucks?, where Frame Semantics has been selected as underlying reference theory. 3.1 Architecture description The central motivation behind the proposed architecture is that there is a dependence among the three tasks of identifying DAs, FRs, and ARs. 
The relationship between tagging frames and arguments appears more evident, as also developed in theories like Frame Semantics - although it is defined independently by each theory. However, some degree of dependence also holds between the DAs and FRs. For example, the FrameNet semantic frame Desiring, expressing a desire of the user for an event to occur, is more likely to be used in the context of an INFORM DA, which indicates the state of notifying the agent with an information, other than in an INSTRUCTION. This is clearly visible in interactions like "I'd like a cup of hot chocolate" or "I'd like to find a shoe shop", where the user is actually notifying the agent about a desire of hers/his.

Figure 1: Dialogue Acts (DAs), Frames (FRs - here semantic frames) and Arguments (ARs - here frame elements) IOB2 tagging for the sentence Where can I find Starbucks?

  Tokens:  Where       can         I            find            Starbucks   ?
  DAs:     B-REQ_INFO  I-REQ_INFO  I-REQ_INFO   I-REQ_INFO      I-REQ_INFO  O
  FRs:     B-Locating  I-Locating  I-Locating   I-Locating      I-Locating  O
  ARs:     O           O           B-COGNIZER   B-LEXICAL_UNIT  B-ENTITY    O

In order to reflect such inter-task dependence, the classification process is tackled here through a hierarchical multi-task learning approach. We designed a multi-layer neural network, whose architecture is shown in Figure 2, where each layer is trained to solve one of the three tasks, namely labelling dialogue acts (DA layer), semantic frames (FR layer), and frame elements (AR layer). The layers are arranged in a hierarchical structure that allows the information produced by earlier layers to be fed to downstream tasks. The network is mainly composed of three BiLSTM (Schuster and Paliwal, 1997) encoding layers. A sequence of input words is initially converted into an embedded representation through an ELMo embeddings layer (Peters et al., 2018), and is fed to the DA layer.
The embedded representation is also passed on through shortcut connections (Hashimoto et al., 2017), and concatenated with both the outputs of the DA and FR layers. Self-attention layers (Zheng et al., 2018) are placed after the DA and FR BiLSTM encoders. Where w_t is the input word at time step t of the sentence w = (w_1, ..., w_T), the architecture can be formalised by:

  e_t = ELMo(w_t)
  s^DA_t = BiLSTM(e_t)
  a^DA_t = SelfAtt(s^DA_t, s^DA)
  s^FR_t = BiLSTM(e_t ⊕ a^DA_t)
  a^FR_t = SelfAtt(s^FR_t, s^FR)
  s^AR_t = BiLSTM(e_t ⊕ a^FR_t)

where ⊕ represents the vector concatenation operator, e_t is the embedding of the word at time t, and s^L = (s^L_1, ..., s^L_T) is the embedded sequence output of each layer L, with L ∈ {DA, FR, AR}. Given an input sentence, the final sequence of labels y^L for each task is computed through a CRF tagging layer, which operates on the output of the DA and FR self-attention, and of the AR BiLSTM embedding, so that:

  y^DA = CRF^DA(a^DA)
  y^FR = CRF^FR(a^FR)
  y^AR = CRF^AR(s^AR)

where a^DA and a^FR are the attended embedded sequences.

[Figure 2: HERMIT Network topology - the ELMo embeddings feed the DA BiLSTM encoder, followed by self-attention and the DA CRF; the attended DA output, concatenated with the embeddings, feeds the FR BiLSTM encoder, self-attention, and FR CRF; the attended FR output, again concatenated with the embeddings, feeds the AR BiLSTM encoder and AR CRF.]

Due to shortcut connections, layers in the upper levels of the architecture can rely both on direct word embeddings and on the hidden representation a^L_t computed by a previous layer. Operationally, the latter carries task-specific information which, combined with the input embeddings, helps in stabilising the classification of each CRF layer, as shown by our experiments. The network is trained by minimising the sum of the individual negative log-likelihoods of the three CRF layers, while at test time the most likely sequence is obtained through Viterbi decoding over the output scores of the CRF layer.
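The information flow of the equations above can be sketched in plain Python with toy stand-ins for ELMo, the BiLSTM encoders, and self-attention (all functions and dimensions below are illustrative placeholders, not the paper's implementation); the point is how each layer consumes the word embeddings concatenated with the previous layer's attended output:

```python
EMB, HID = 4, 3   # toy dimensions (the paper uses 1024-dim ELMo embeddings)

def elmo(words):                      # stand-in embedding layer
    return [[float(len(w))] * EMB for w in words]

def bilstm(seq):                      # stand-in encoder: maps each token to a HID-dim state
    return [[sum(x) / len(x)] * HID for x in seq]

def self_att(seq):                    # stand-in self-attention: shape-preserving mixing
    mean = [sum(col) / len(seq) for col in zip(*seq)]
    return [[xi + mi for xi, mi in zip(x, mean)] for x in seq]

def concat(a, b):                     # the ⊕ operator: per-token concatenation
    return [xa + xb for xa, xb in zip(a, b)]

words = "Where can I find Starbucks ?".split()
e = elmo(words)                       # e_t
a_da = self_att(bilstm(e))            # a^DA_t, fed to the DA CRF
a_fr = self_att(bilstm(concat(e, a_da)))  # a^FR_t, fed to the FR CRF
s_ar = bilstm(concat(e, a_fr))        # s^AR_t, fed to the AR CRF

# Shortcut connections: the FR and AR encoders see EMB + HID features per token.
print(len(e[0]), len(concat(e, a_da)[0]), len(s_ar[0]))  # 4 7 3
```

The concatenation makes the shortcut explicit: downstream encoders always see the original embeddings alongside the latent semantics of the previous task.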
4 Experimental Evaluation

In order to assess the effectiveness of the proposed architecture and compare against existing off-the-shelf tools, we run several empirical evaluations.

4.1 Datasets

We tested the system on two datasets, different in size and complexity of the addressed language.

NLU-Benchmark dataset The first (publicly available) dataset, NLU-Benchmark (NLU-BM), contains 25,716 utterances annotated with targeted Scenario, Action, and involved Entities. For example, "schedule a call with Lisa on Monday morning" is labelled to contain a calendar scenario, where the set event action is instantiated through the entities [event name: a call with Lisa] and [date: Monday morning]. The Intent is then obtained by concatenating scenario and action labels (e.g., calendar set event). This dataset consists of multiple home assistant task domains (e.g., scheduling, playing music), chit-chat, and commands to a robot (Liu et al., 2019).2

Table 1: Statistics of the NLU-Benchmark dataset (Liu et al., 2019).

                         NLU-BM   NLU-BM (reduced)
  Sentences              25715    11020
  Sentence length        7.06     6.84
  Scenario labels set    18       18
  Action labels set      54       51
  Intent labels set      68       64
  Entity labels set      56       54
  Number of intents      25715    11020
  Number of entities     20597    9130
  Intents/sentence       1        1
  Entities/sentence      0.8      0.83

ROMULUS dataset The second dataset, ROMULUS, is composed of 1,431 sentences, for each of which dialogue acts, semantic frames, and corresponding frame elements are provided. This dataset is being developed for modelling user utterances to open-domain conversational systems for robotic platforms that are expected to handle different interaction situations/patterns - e.g., chit-chat, command interpretation.
The corpus is composed of different subsections, addressing heterogeneous linguistic phenomena, ranging from imperative instructions (e.g., "enter the bedroom slowly, turn left and turn the lights off") to complex requests for information (e.g., "good morning I want to buy a new mobile phone is there any shop nearby?") or open-domain chit-chat (e.g., "nope thanks let's talk about cinema"). A considerable number of utterances in the dataset is collected through Human-Human Interaction studies in the robotic domain (≈70%), though a small portion has been synthetically generated for balancing the frame distribution.

2 Available at https://github.com/xliuhw/NLU-Evaluation-Data.

Table 2: Statistics of the ROMULUS dataset.

  Sentences                  1431
  Sentence length            7.24
  Dialogue act labels set    11
  Frame labels set           58
  Frame element labels set   84
  Number of dialogue acts    1906
  Number of frames           2013
  Number of frame elements   5059
  Dialogue acts/sentence     1.33
  Frames/sentence            1.41
  Frame elements/sentence    3.54

Note that while the NLU-BM is designed to have at most one intent per utterance, sentences are here tagged following the IOB2 sequence labelling scheme (see the example of Figure 1), so that multiple dialogue acts, frames, and frame elements can be defined at the same time for the same utterance. For example, three dialogue acts are identified within the sentence [good morning]OPENING [I want to buy a new mobile phone]INFORM [is there any shop nearby?]REQ_INFO. As a result, though smaller, the ROMULUS dataset provides a richer representation of the sentence's semantics, making the tasks more complex and challenging. These observations are highlighted by the statistics in Table 2, which show an average number of dialogue acts, frames and frame elements always greater than 1 (i.e., 1.33, 1.41 and 3.54, respectively).
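The IOB2 segmentation used throughout (illustrated in Figure 1) can be made concrete with a short decoder. The sketch below is our own illustrative code, not part of HERMIT: it turns a sequence of IOB2 labels into labelled chunks, which is exactly the multi-label reading that allows several dialogue acts or frames per utterance.

```python
def iob2_chunks(tags):
    """Decode an IOB2 tag sequence into (label, start, end) chunks.

    "B-X" opens a chunk with label X, "I-X" continues it, and "O"
    closes any open chunk; end indices are exclusive.
    """
    chunks, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or tag == "O" or (
            tag.startswith("I-") and tag[2:] != label
        ):
            if label is not None:            # close the currently open chunk
                chunks.append((label, start, i))
                start, label = None, None
        if tag.startswith("B-"):             # open a new chunk
            start, label = i, tag[2:]
        elif tag.startswith("I-") and label is None:
            start, label = i, tag[2:]        # tolerate an I- without a B-
    if label is not None:
        chunks.append((label, start, len(tags)))
    return chunks

# The FR layer of Figure 1: "Where can I find Starbucks ?"
fr_tags = ["B-Locating", "I-Locating", "I-Locating",
           "I-Locating", "I-Locating", "O"]
print(iob2_chunks(fr_tags))  # [('Locating', 0, 5)]
```

Running the same decoder over the DA layer of the three-act example ([good morning]OPENING ...) would likewise yield one chunk per dialogue act.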
4.2 Experimental setup

All the models are implemented with Keras (Chollet et al., 2015) and TensorFlow (Abadi et al., 2015) as backend, and run on a Titan Xp. Experiments are performed in a 10-fold setting, using one fold for tuning and one for testing. However, since HERMIT is designed to operate on dialogue acts, semantic frames, and frame elements, the best hyper-parameters are obtained over the ROMULUS dataset via a grid search using early stopping, and are applied also to the NLU-BM models.3 This guarantees fairness towards other systems, which do not perform any fine-tuning on the training data. We make use of pre-trained 1024-dim ELMo embeddings (Peters et al., 2018) as word vector representations, without re-training the weights.

3 Notice that in the NLU-BM experiments only the number of epochs is tuned, using 10% of the training data.

4.3 Experiments on the NLU-Benchmark

This section shows the results obtained on the NLU-Benchmark (NLU-BM) dataset provided by (Liu et al., 2019), comparing HERMIT to off-the-shelf NLU services, namely: Rasa4, Dialogflow5, LUIS6 and Watson7. In order to apply HERMIT to NLU-BM annotations, these have been aligned so that Scenarios are treated as DAs, Actions as FRs and Entities as ARs. To make our model comparable against other approaches, we reproduced the same folds as in (Liu et al., 2019), where a resized version of the original dataset is used. Table 1 shows some statistics of the NLU-BM and its reduced version. Moreover, micro-averaged Precision, Recall and F1 are computed following the original paper to assure consistency. TP, FP and FN of intent labels are obtained as in any other multi-class task. An entity is instead counted as TP if there is an overlap between the predicted and the gold span, and their labels match. Experimental results are reported in Table 3. Statistical significance is evaluated through the Wilcoxon signed-rank test.
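The overlap-based entity matching just described (a predicted entity counts as a true positive when its span overlaps a gold span and the labels match) can be sketched as follows. This is our own illustrative code under that stated rule, not the official evaluation script; the entity labels in the example are hypothetical.

```python
def micro_prf(gold, pred):
    """gold/pred: lists of (label, start, end) entity spans (end exclusive).

    A prediction is a TP if it overlaps a not-yet-matched gold span with the
    same label; remaining predictions are FPs, remaining gold spans are FNs.
    """
    unmatched = list(gold)
    tp = 0
    for label, s, e in pred:
        for g in unmatched:
            gl, gs, ge = g
            if gl == label and s < ge and gs < e:   # same label + span overlap
                tp += 1
                unmatched.remove(g)
                break
    fp = len(pred) - tp
    fn = len(unmatched)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = [("date", 5, 7), ("event_name", 2, 5)]
pred = [("date", 6, 8), ("event_name", 0, 3), ("place", 1, 2)]
print(micro_prf(gold, pred))  # P = 2/3, R = 1.0, F1 = 0.8
```

Micro-averaging over a corpus amounts to summing TP/FP/FN across all sentences before computing P, R and F1.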
When looking at the intent F1, HERMIT performs significantly better than Rasa [Z = -2.701, p = .007] and LUIS [Z = -2.807, p = .005]. On the contrary, the improvement w.r.t. Dialogflow [Z = -1.173, p = .241] does not seem to be significant. This is probably due to the high variance obtained by Dialogflow across the 10 folds. Watson is by a significant margin the most accurate system in recognising intents [Z = -2.191, p = .028], especially due to its Precision score. The hierarchical multi-task architecture of HERMIT seems to contribute strongly to entity tagging accuracy. In fact, on this task it performs significantly better than Rasa [Z = -2.803, p = .005], Dialogflow [Z = -2.803, p = .005], LUIS [Z = -2.803, p = .005] and Watson [Z = -2.805, p = .005], with improvements from 7.08 to 35.92 of F1.9

4 https://rasa.com/
5 https://dialogflow.com/
6 https://www.luis.ai/
7 https://www.ibm.com/watson
9 Results for Watson are shown for the non-contextual training. Due to Watson limitations, i.e. 2000 training examples for contextual training, we could not run the whole test in such a configuration. For fairness, we report results made on 8 random samplings of 2000/1000 train/test examples each (F1): Intent = 72.64 ± 7.46, Slots = 77.01 ± 10.65, Combined = 74.85 ± 7.54.

Following (Liu et al., 2019), we then evaluated a metric that combines intent and entities, computed by simply summing up the two confusion matrices (Table 4). Results highlight the contribution of the entity tagging task, where HERMIT outperforms the other approaches. Wilcoxon signed-rank tests were conducted to compare the HERMIT combined F1 against the other systems. The statistical analysis shows a significant improvement over Rasa [Z = -2.803, p = .005], Dialogflow [Z = -2.803, p = .005], LUIS [Z = -2.803, p = .005] and Watson [Z = -2.803, p = .005].
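For reference, Z statistics of this kind come from the normal approximation of the Wilcoxon signed-rank test over the ten per-fold scores. A minimal pure-Python version is sketched below (illustrative only; the per-fold F1 scores are made up, and in practice one would use scipy.stats.wilcoxon). Note the reported Z values are negative because the paper's convention takes the smaller rank sum; the magnitude is the same.

```python
from math import sqrt

def wilcoxon_z(xs, ys):
    """Normal-approximation Z for the Wilcoxon signed-rank test on paired
    samples (zero differences dropped, average ranks for tied |d|)."""
    diffs = [x - y for x, y in zip(xs, ys) if x != y]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                       # assign average ranks over tied |d|
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mu) / sigma

# Ten hypothetical per-fold F1 scores for two systems:
a = [87.1, 87.5, 86.9, 88.0, 87.3, 87.8, 86.5, 87.6, 87.2, 87.9]
b = [86.2, 86.0, 86.4, 86.1, 86.6, 86.3, 86.8, 85.9, 86.5, 86.7]
print(round(wilcoxon_z(a, b), 3))  # 2.701 (cf. the |Z| = 2.701 reported vs. Rasa)
```

With n = 10 folds, |Z| ≈ 2.8 is the largest attainable value (all ten differences in the same direction), which is why several comparisons above share Z = -2.803.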
4.3.1 Ablation study

In order to assess the contributions of HERMIT's components, we performed an ablation study. The results are obtained on the NLU-BM, following the same setup as in Section 4.3, and are shown in Table 5. The first row refers to the complete architecture, while -SA shows the results of HERMIT without the self-attention mechanism. From this latter we further remove shortcut connections (-SA/CN) and CRF taggers (-SA/CRF). The last row (-SA/CN/CRF) shows the results of a simple architecture, without self-attention, shortcuts, and CRF. Though not significant, the contribution of the several architectural components can be observed. The contribution of self-attention is distributed across all the tasks, with a small inclination towards the upstream ones. This means that while the entity tagging task is mostly lexicon independent, it is easier to identify pivoting keywords for predicting the intent, e.g. the verb "schedule" triggering the calendar set event intent. The impact of shortcut connections is more evident on entity tagging. In fact, the effect provided by shortcut connections is that the information flowing throughout the hierarchical architecture allows higher layers to encode richer representations (i.e., original word embeddings + latent semantics from the previous task). Conversely, the presence of the CRF tagger affects mainly the lower levels of the hierarchical architecture. This is probably due not to their position in the hierarchy, but to the way the tasks have been designed. In fact, while the span of an entity is expected to cover few tokens, in intent recognition (i.e., a combination of Scenario and Action recognition) the span always covers all the tokens of an utterance. The CRF therefore preserves the consistency of the IOB2 sequence structure.
However, HERMIT seems to be the most stable architecture, both in terms of standard deviation and task performance, with a good balance between intent and entity recognition.

Table 3: Comparison of HERMIT with the results obtained in (Liu et al., 2019) for Intents and Entity Types.

              Intent                                   Entity
              P            R            F1            P            R            F1
  Rasa        86.31±1.07   86.31±1.07   86.31±1.07    85.93±1.05   69.40±1.66   76.78±1.27
  Dialogflow  86.97±2.02   85.87±2.33   86.42±2.18    78.21±3.35   70.85±4.70   74.30±3.74
  LUIS        85.53±1.14   85.51±1.15   85.52±1.15    83.69±1.31   72.46±2.05   77.66±1.45
  Watson8     88.41±0.68   88.08±0.74   88.24±0.70    35.39±0.93   78.70±2.01   48.82±1.14
  HERMIT      87.41±0.63   87.70±0.64   87.55±0.63    87.65±0.98   82.04±2.12   84.74±1.18

Table 4: Comparison of HERMIT with the results in (Liu et al., 2019) by combining Intent and Entity.

              Combined
              P            R            F1
  Rasa        86.16±0.90   78.66±1.28   82.24±1.08
  Dialogflow  83.19±2.43   79.07±3.10   81.07±2.64
  LUIS        84.76±0.67   79.61±1.25   82.1±0.90
  Watson      54.02±0.75   83.83±1.02   65.7±0.75
  HERMIT      87.52±0.61   85.03±1.11   86.25±0.66

Table 5: Ablation study of HERMIT on the NLU-BM.

                Intent       Entity       Combined
  HERMIT        87.55±0.63   84.74±1.18   86.25±0.66
  - SA          87.03±0.74   84.35±1.15   85.81±0.81
  - SA/CN       87.09±0.78   82.43±1.42   84.97±0.72
  - SA/CRF      83.57±0.75   84.77±1.06   84.09±0.79
  - SA/CN/CRF   83.78±1.10   82.22±1.41   83.10±1.06

4.4 Experiments on the ROMULUS dataset

In this section we report the experiments performed on the ROMULUS dataset (Table 6). Together with the evaluation metrics used in (Liu et al., 2019), we report the span F1, computed using the CoNLL-2000 shared task evaluation script, and the Exact Match (EM) accuracy of the entire sequence of labels.
It is worth noticing that the EM Combined score is computed as the conjunction of the three individual predictions \u2013 e.g., a match is when all the three sequences are correct. Results in terms of EM re\ufb02ect the complexity of the different tasks, motivating their position within the hierarchy. Speci\ufb01cally, dialogue act identi\ufb01cation is the easiest task (89.31%) with respect to frame (82.60%) and frame element (79.73%), due to the shallow semantics it aims to catch. However, when looking at the span F1, its score (89.42%) is lower than the frame element identi\ufb01cation task (92.26%). What happens is that even though the label set is smaller, dialogue act spans are supposed to be longer than frame element ones, sometimes covering the whole sentence. Frame elements, instead, are often one or two tokens long, that contribute in increasing span based metrics. Frame identi\ufb01cation is the most complex task for several reasons. First, lots of frame spans are interlaced or even nested; this contributes to increasing the network entropy. Second, while the dialogue act label is highly related to syntactic structures, frame identi\ufb01cation is often subject to the inherent ambiguity of language (e.g., get can evoke both Commerce buy and Arriving). We also report the metrics in (Liu et al., 2019) for consistency. For dialogue act and frame tasks, scores provide just the extent to which the network is able to detect those labels. In fact, the metrics do not consider any span information, essential to solve and evaluate our tasks. However, the frame element scores are comparable to the benchmark, since the task is very similar. Overall, getting back to the combined EM accuracy, HERMIT seems to be promising, with the network being able to reproduce all the three gold sequences for almost 70% of the cases. The importance of this result provides an idea of the architecture behaviour over the entire pipeline. 
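The combined EM accuracy can be computed exactly as described: a sentence counts only when all three predicted label sequences match their gold counterparts. A minimal sketch of this conjunction (toy data and our own function names, purely illustrative):

```python
def exact_match(golds, preds):
    """Per-task and combined Exact Match over a corpus.

    golds/preds map a task name to a list of label sequences, one per
    sentence. Combined EM counts a sentence only if every task's
    predicted sequence equals the gold one.
    """
    tasks = list(golds)
    n = len(golds[tasks[0]])
    per_task = {
        t: sum(g == p for g, p in zip(golds[t], preds[t])) / n for t in tasks
    }
    combined = sum(
        all(golds[t][i] == preds[t][i] for t in tasks) for i in range(n)
    ) / n
    return per_task, combined

# Two toy sentences: the second one has a frame error, so it fails combined EM.
golds = {"DA": [["B-X"], ["B-Y"]], "FR": [["B-F"], ["B-G"]], "AR": [["O"], ["O"]]}
preds = {"DA": [["B-X"], ["B-Y"]], "FR": [["B-F"], ["B-H"]], "AR": [["O"], ["O"]]}
per_task, combined = exact_match(golds, preds)
print(per_task["DA"], per_task["FR"], combined)  # 1.0 0.5 0.5
```

Because the combined score is a conjunction, it is bounded above by the worst per-task EM, which is why 69.53% combined EM sits below the 79.73% of frame elements.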
4.5 Discussion

The experimental evaluation reported in this section provides several insights. The proposed architecture addresses the problem of NLU in wide-coverage conversational systems, modelling semantics through multiple Dialogue Acts and Frame-like structures in an end-to-end fashion. In addition, its hierarchical structure, which reflects the complexity of the single tasks, allows rich representations to be provided across the whole network. In this respect, we can affirm that the architecture successfully tackles the multi-task problem, with results that are promising in terms of usability and applicability of the system in real scenarios.

Table 6: HERMIT performance over the ROMULUS dataset. P, R and F1 are evaluated following (Liu et al., 2019) metrics.

                  P            R            F1           span F1      EM
  Dialogue act    96.49±0.98   95.95±1.41   96.21±1.13   89.42±3.74   89.31±3.28
  Frame           95.26±0.95   94.02±1.20   94.64±1.09   84.40±2.99   82.60±2.68
  Frame element   95.62±0.61   93.98±0.76   94.79±0.56   92.26±1.22   79.73±2.03
  Combined        93.90±0.89   92.95±0.86   93.42±0.83   -            69.53±2.50

However, a thorough evaluation in the wild must be carried out to assess to what extent the system is able to handle complex spoken language phenomena, such as repetitions, disfluencies, etc. To this end, a real-scenario evaluation may open new research directions, by addressing new tasks to be included in the multi-task architecture. This is supported by the scalable nature of the proposed approach. Moreover, following (Sanh et al., 2018), corpora providing different annotations can be exploited within the same multi-task network. We also empirically showed how the same architectural design can be applied to a dataset addressing similar problems. In fact, a comparison with off-the-shelf tools shows the benefits provided by the hierarchical structure, with better overall performance than any current solution.
An ablation study has been performed, assessing the contribution provided by the different components of the network. The results show how the shortcut connections help in the more fine-grained tasks, successfully encoding richer representations. CRFs help when longer spans are being predicted, which are more present in the upstream tasks. Finally, the seq2seq design allowed obtaining a multi-label approach, enabling the identification of multiple spans in the same utterance that might evoke different dialogue acts/frames. This represents a novelty for NLU in conversational systems, as such a problem has always been tackled as single-intent detection. However, the seq2seq approach also carries some limitations, especially on the Frame Semantics side. In fact, label sequences are linear structures, not suitable for representing nested predicates, a tough and common problem in Natural Language. For example, in the sentence "I want to buy a new mobile phone", the [to buy a new mobile phone] span represents both the DESIRED EVENT frame element of the Desiring frame and a Commerce buy frame at the same time. At the moment of writing, we are working on modelling nested predicates through the application of bilinear models.

5 Future Work

We have started integrating a corpus of 5M sentences of real users chit-chatting with our conversational agent, though at the time of writing they represent only 16% of the current dataset. As already pointed out in Section 4.5, there are some limitations in the current approach that need to be addressed. First, we have to assess the network's capability in handling typical phenomena of spontaneous spoken language input, such as repetitions and disfluencies (Shalyminov et al., 2018). This may open new research directions, by including new tasks to identify/remove any kind of noise from the spoken input. Second, the seq2seq scheme does not deal with nested predicates, a common aspect of Natural Language.
To the best of our knowledge, there is no architecture that implements an end-to-end network for FrameNet-based semantic parsing. Following previous work (Strubell et al., 2018), one of our future goals is to tackle such problems through hierarchical multi-task architectures that rely on bilinear models.

6 Conclusion

In this paper we presented HERMIT NLU, a hierarchical multi-task architecture for the semantic parsing of sentences in cross-domain spoken dialogue systems. The problem is addressed using a seq2seq model employing BiLSTM encoders and self-attention mechanisms, followed by CRF tagging layers. We evaluated HERMIT on a 25K-sentence NLU-Benchmark, where it outperforms state-of-the-art NLU tools such as Rasa, Dialogflow, LUIS and Watson, even without specific fine-tuning of the model.

Acknowledgement

This research was partially supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 688147 (MuMMER project10).

10 http://mummer-project.eu/"
+ }
+ ],
+ "Yonatan Bisk": [
+ {
+ "url": "http://arxiv.org/abs/1911.11641v1",
+ "title": "PIQA: Reasoning about Physical Commonsense in Natural Language",
+ "abstract": "To apply eyeshadow without a brush, should I use a cotton swab or a\ntoothpick? Questions requiring this kind of physical commonsense pose a\nchallenge to today's natural language understanding systems. While recent\npretrained models (such as BERT) have made progress on question answering over\nmore abstract domains - such as news articles and encyclopedia entries, where\ntext is plentiful - in more physical domains, text is inherently limited due to\nreporting bias. Can AI systems learn to reliably answer physical common-sense\nquestions without experiencing the physical world? In this paper, we introduce\nthe task of physical commonsense reasoning and a corresponding benchmark\ndataset Physical Interaction: Question Answering or PIQA. Though humans find\nthe dataset easy (95% accuracy), large pretrained models struggle (77%). We\nprovide analysis about the dimensions of knowledge that existing models lack,\nwhich offers significant opportunities for future research.",
+ "authors": "Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, Yejin Choi",
+ "published": "2019-11-26",
+ "updated": "2019-11-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "main_content": "Introduction Before children learn language, they already start forming categories and concepts based on the physical properties of objects around them (Hespos and Spelke 2004). This model of the world grows richer as they learn to speak, but already captures physical commonsense knowledge about everyday objects: their physical properties, affordances, and how they can be manipulated. This knowledge is critical for day-to-day human life, including tasks such as problem solving (what can I use as a pillow when camping?) and expressing needs and desires (bring me a harder pillow). Likewise, we hypothesize that modeling physical commonsense knowledge is a major challenge on the road to true AI-completeness, including robots that interact with the world and understand natural language. Much of physical commonsense can be expressed in language, as the versatility of everyday objects and common concepts eludes other label schemes. However, due to issues of reporting bias, these commonsense properties - facts like 'it is a bad idea to apply eyeshadow with a toothpick' - are rarely directly reported. Although much recent progress has been made in Natural Language Processing through a shift towards large-scale pretrained representations from unlabeled text (Radford et al. 2018; Devlin et al. 2019; Liu et al. 2019), the bulk of the success of this paradigm has been on core abstract tasks and domains. State-of-the-art models can reliably answer questions given an encyclopedia article (Rajpurkar et al. 2016) or recognize named entities (Tjong Kim Sang and De Meulder 2003), but it is not clear whether they can robustly answer questions that require physical commonsense knowledge. Copyright (c) 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. arXiv:1911.11641v1 [cs.CL] 26 Nov 2019
[Figure 1: PIQA. Given a physical goal expressed in natural language, like 'to separate egg whites...', a model must choose the most sensible solution: (a) 'Squeeze the water bottle and press it against the yolk. Release, which creates suction and lifts the yolk.' or (b) 'Place the water bottle and press it against the yolk. Keep pushing, which creates suction and lifts the yolk.' Our dataset tests the ability of natural language understanding models to link text to a robust intuitive-physics model of the world. Here, humans easily pick answer (a) because separating the egg requires pulling the yolk out, while machines are easily fooled.] To study this question and begin bridging the representational gap, we introduce Physical Interaction: Question Answering, or PIQA to evaluate language representations on their knowledge of physical commonsense. We focus on everyday situations with a preference for atypical solutions. Our dataset is inspired by instructables.com, which provides users with instructions on how to build, craft, bake, or manipulate objects using everyday materials.
[Figure 2: PIQA covers a broad array of phenomena. Two categories of example QA pairs (check = correct, cross = incorrect). a. Shape, Material, and Purpose: [Goal] Make an outdoor pillow - [Sol1] Blow into a tin can and tie with rubber band (cross); [Sol2] Blow into a trash bag and tie with rubber band (check). [Goal] To make a hard shelled taco - [Sol1] put seasoned beef, cheese, and lettuce onto the hard shell (cross); [Sol2] put seasoned beef, cheese, and lettuce into the hard shell (check). [Goal] How do I find something I lost on the carpet? - [Sol1] Put a solid seal on the end of your vacuum and turn it on (cross); [Sol2] Put a hair net on the end of your vacuum and turn it on (check). b. Commonsense Convenience: [Goal] How to make sure all the clocks in the house are set accurately? - [Sol1] Get a solar clock for a reference and place it just outside a window that gets lots of sun. Use a system of call and response once a month, having one person stationed at the solar clock who yells out the correct time and have another person move to each of the indoor clocks to check if they are showing the right time. Adjust as necessary. (cross); [Sol2] Replace all wind-ups with digital clocks. That way, you set them once, and that's it. Check the batteries once a year or if you notice anything looks a little off. (check). Left: examples that require knowledge of basic properties of the objects (flexibility, curvature, and being porous); Right: both answers may be technically correct but one is more convenient and preferable.] We asked annotators to provide semantic perturbations or alternative approaches which are otherwise syntactically and topically similar to ensure physical knowledge is targeted. The dataset is further cleaned of basic artifacts using the AFLite algorithm introduced in (Sakaguchi et al. 2020; Sap et al. 2019) which is an improvement on adversarial filtering (Zellers et al. 2018; Zellers et al. 2019b). Throughout this work we first detail the construction of our new benchmark for physical commonsense. Second, we show that popular approaches to large-scale language pretraining, while highly successful on many abstract tasks, fall short when a physical model of the world is required. Finally, our goal is to elicit further research into building language representations that capture details of the real world. To these ends, we perform error and corpora analyses to provide insights for future work. Dataset We introduce a new dataset, PIQA, for benchmarking progress in physical commonsense understanding.
The underlying task is multiple choice question answering: given a question q and two possible solutions s1, s2, a model or a human must choose the most appropriate solution, of which exactly one is correct. We collect data with how-to instructions as a scaffold, and use state-of-the-art approaches for handling spurious biases, which we will discuss below. Instructables as a source of physical commonsense Our goal is to construct a resource that requires concrete physical reasoning. To achieve this, we provide a prompt to the annotators derived from instructables.com. The instructables website is a crowdsourced collection of instructions for doing everything from cooking to car repair. In most cases, users provide images or videos detailing each step and a list of tools that will be required. Most goals are simultaneously rare and unsurprising. While an annotator is unlikely to have built a UV-fluorescent steampunk lamp or made a backpack out of duct tape, it is not surprising that someone interested in home crafting would create these, nor will the tools and materials be unfamiliar to the average person. Using these examples as the seed for their annotation helps remind annotators about the less prototypical uses of everyday objects. Second, and equally important, is that instructions build on one another. This means that any QA pair inspired by an instructable is more likely to explicitly state assumptions about what preconditions need to be met to start the task and what postconditions define success. Collecting data through goal-solution pairs Unlike traditional QA tasks, we define our dataset in terms of Goal and Solution pairs (see Figure 2 for example Goal-Solution pairs and types of physical reasoning). The Goal in most cases can be viewed as indicating a post-condition and the solutions indicate the procedure for accomplishing this. The more detailed the goal, the easier it is for annotators to write both correct and incorrect solutions.
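The task setup above (a goal paired with two candidate solutions, exactly one correct) can be sketched as a tiny evaluation harness. The field names and toy examples below are illustrative only, not the released data format.

```python
from dataclasses import dataclass

@dataclass
class PIQAInstance:
    goal: str
    sol1: str
    sol2: str
    label: int  # 0 if sol1 is correct, 1 if sol2 is correct

def accuracy(instances, predict):
    """Fraction of instances where predict(instance) matches the gold label."""
    return sum(predict(ex) == ex.label for ex in instances) / len(instances)

# Toy examples paraphrasing Figure 2; labels mark the sensible solution.
data = [
    PIQAInstance("Make an outdoor pillow",
                 "Blow into a tin can and tie with rubber band",
                 "Blow into a trash bag and tie with rubber band", label=1),
    PIQAInstance("To make a hard shelled taco,",
                 "put seasoned beef, cheese, and lettuce onto the hard shell.",
                 "put seasoned beef, cheese, and lettuce into the hard shell.", label=1),
]

print(accuracy(data, lambda ex: 1))  # 1.0 on this toy set
```

Since exactly one option is correct, random chance is 50% on a balanced set, which matches the baseline rows reported later in Table 1.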
As noted above, the second component of our annotation design is reminding people to think creatively. We initially experimented with asking annotators for (task, tool) pairs via unconstrained prompts, but found that reporting bias swamped the dataset. In particular, when thinking about how to achieve a goal, people most often are drawn to prototypical solutions and look for tools in the kitchen (e.g. forks and knives) or the garage (e.g. hammers and drills). They rarely considered the literal hundreds of other everyday objects that might be in their own homes (e.g. sidewalk chalk, shower curtains, etc.). To address this, and flatten the distribution of referenced objects (see Figure 5), we prompt the annotators with links to instructables. Specifically, annotators were asked to glance at the instructions of an instructable and pull out, or have it inspire them to construct, two component tasks. They would then articulate the goal (often centered on atypical materials) and how to achieve it. In addition, we asked them to provide a permutation to their own solution which makes it invalid, often subtly (Figure 3). [Figure 3: In the HIT design the instructable provides inspiration to think out-of-the-box (1 Sock, 3 Products) and annotators are asked for 1. a physical goal, 2. a valid solution, and 3. a trick. The trick should sound reasonable, but be wrong, often due to a subtle misunderstanding of preconditions or physics. Additional HITs (not shown) were run for qualification prior to this stage and validation afterwards.] To further assist diversity, we seed annotators with instructables drawn from six categories (costume, outside, craft, home, food, and workshop). We asked that two examples be drawn per instructable to encourage one of them to come later in the process and require precise articulation of pre-conditions. During validation, examples with low agreement were removed from the data.
This often meant that correct examples were removed that required expert-level knowledge of a domain (e.g. special woodworking terminology), which should not fall under the umbrella of \u201ccommonsense.\u201d Because we focus on human-generated tricks, annotators were free to come up with clever ways to hide deception. Often, this meant making very subtle changes to the solution to render it incorrect. In these cases, the two solutions may differ by as little as one word. We found that annotators used both simple linguistic tricks (e.g. negation and numerical changes) and often swapped a key action or item for another that was topically similar but not helpful for completing the given goal. For this reason, our interface also includes a diff button which highlights where the solutions differ. This improved annotator accuracy and speed substantially. Annotator pay averaged over $15/hr according to both self-reporting on turkerview.com and our timing calculations. (In addition to this design, we also include a qualification HIT which contained well-constructed and underspecified (goal, solution) pairs. Annotators had to successfully (>80%) identify which were well formed to participate in the main HIT. Data was collected in batches of several thousand triples and validated by other annotators for correctness. Users with low agreement were disqualified.) [Figure 4: Sentence length distributions for both correct solutions and tricks are nearly identical across the training set.] Statistics In total our dataset is comprised of over 16,000 training QA pairs, with an additional ~2K and ~3K held out for development and testing, respectively. Our goals, as tokenized by Spacy, average 7.8 words and both correct and incorrect solutions average 21.3 words. In total, this leads to over 3.7 million lexical tokens in the training data.
Figure 4 shows a plot of the correct and incorrect sequence lengths (as tokenized by the GPT BPE tokenizer), with the longest 1% of the data removed. While there are minor differences, the two distributions are nearly identical. We also analyzed the overlap in the vocabulary and find that in all cases (noun, verb, adjective, and adverb) we see at least an 85% overlap between words used in correct and incorrect solutions. In total we have 6,881 unique nouns, 2,493 verbs, 2,263 adjectives, and 604 adverbs in the training data. The most common of each are plotted in Figure 5 alongside their cumulative distributions. Again, this helps verify that the dataset revolves very heavily around physical phenomena, properties, and manipulations. For example, the top adjectives include state (dry, clean, hot) and shape (small, sharp, flat); adverbs include temporal conditions (then, when) and manner (quickly, carefully, completely). These properties often differentiate correct from incorrect answers, as shown in examples throughout the paper. We also color words according to their concreteness score (Brysbaert, Warriner, and Kuperman 2014), though many \u201cabstract\u201d words have concrete realizations in our dataset. Removing Annotation Artifacts As noted previously, we use AFLite (Sakaguchi et al. 2020) to remove stylistic artifacts and trivial examples from the data, which have been shown to artificially inflate model performance on previous NLI benchmarks (Poliak et al. 2018; Gururangan et al. 2018). The AFLite algorithm performs a systematic data bias reduction: it discards instances whose given feature representations are collectively highly indicative of the target label. In practice, we use 5,000 examples from the original dataset to fine-tune BERT-Large for this task and compute the corresponding embeddings of all remaining instances.
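The vocabulary-overlap statistic reported above can be approximated in a few lines; this sketch uses whitespace tokenization rather than the Spacy POS-tagged tokens used in the paper.

```python
def vocabulary(solutions):
    """Lowercased word types across a list of solution strings."""
    return {w.lower() for s in solutions for w in s.split()}

def vocab_overlap(correct_sols, incorrect_sols):
    """Share of the correct-solution vocabulary also used in incorrect solutions."""
    v_cor = vocabulary(correct_sols)
    v_inc = vocabulary(incorrect_sols)
    return len(v_cor & v_inc) / len(v_cor)

overlap = vocab_overlap(
    ["squeeze the water bottle", "use a hair net"],
    ["press the water bottle", "use a solid seal"],
)
print(overlap)  # 5 shared types out of 8 correct-solution types: 0.625
```

A high overlap like the reported 85% is desirable here: it means the correct/incorrect distinction cannot be made from surface vocabulary alone.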
AFLite uses an ensemble of linear classifiers trained on random subsets of the data to determine whether these pre-computed embeddings are strong indicators of the correct answer option. (Tokenization uses Spacy, https://spacy.io; all data was collected in English.) [Figure 5: Frequency distributions for the top seventy-five words tagged by Spacy as noun, verb, adverb, or adjective. We see that the vast majority of concepts focus on physical properties (e.g. small, hot, plastic, wooden) and how objects can be manipulated (e.g. cut, cover, soak, push). Additionally, we see strongly Zipfian behavior in all tags but the adverbs. Words are colored by the average concreteness scores presented by (Brysbaert, Warriner, and Kuperman 2014).] Instead of having to specifically identify the possible sources of biases, this approach enables unsupervised data bias reduction by relying on state-of-the-art methods to uncover undesirable annotation artifacts. For more information about AFLite, please refer to (Sakaguchi et al. 2020). Experiments In this section, we test the performance of state-of-the-art natural language understanding models on our dataset, PIQA. In particular, we consider the following three large-scale pretrained transformer models: a. GPT (Radford et al. 2018) is a model that processes text left-to-right, and was pretrained using a language modeling objective. We use the original 124M parameter GPT model. b. BERT (Devlin et al. 2019) is a model that processes text bidirectionally, and thus was pretrained using a special masked language modeling objective. We use BERT-Large with 340M parameters. c. RoBERTa (Liu et al. 2019) is a version of the BERT model that was made to be significantly more robust through pretraining on more data and careful validation of the pretraining hyperparameters. We use RoBERTa-Large, which has 355M parameters. We follow standard best practices in adapting these models for two-way classification. We consider the two solution choices independently: for each choice, the model is provided the goal, the solution choice, and a special [CLS] token.
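A heavily simplified sketch of this AFLite-style filtering loop follows. The real algorithm of Sakaguchi et al. (2020) operates on BERT embeddings with tuned subset sizes and cutoffs; the classifier, thresholds, and toy data below are illustrative assumptions.

```python
import random

def aflite(examples, fit, n_rounds=4, n_partitions=8, cutoff=0.75, k=2):
    """Simplified AFLite-style bias reduction.

    examples: list of (features, label) pairs. fit(train) returns a
    predict(features) -> label function (a linear classifier in the paper).
    Each round, instances that held-out classifiers predict correctly more
    than `cutoff` of the time are discarded (at most k per round).
    """
    data = list(examples)
    for _ in range(n_rounds):
        hits = {i: [] for i in range(len(data))}
        for _ in range(n_partitions):
            idx = list(range(len(data)))
            random.shuffle(idx)
            half = len(idx) // 2
            predict = fit([data[i] for i in idx[:half]])
            for i in idx[half:]:  # score only held-out instances
                x, y = data[i]
                hits[i].append(predict(x) == y)
        def predictability(i):
            return sum(hits[i]) / len(hits[i]) if hits[i] else 0.0
        ranked = sorted(range(len(data)), key=predictability, reverse=True)
        drop = {i for i in ranked[:k] if predictability(i) > cutoff}
        if not drop:
            break
        data = [ex for i, ex in enumerate(data) if i not in drop]
    return data

# Toy demo: a sign-based 'classifier' that ignores its training data, so
# examples whose label matches sign(x) are trivially predictable artifacts.
random.seed(0)
fit = lambda train: (lambda x: int(x > 0))
pool = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0), (0.5, 0)]
kept = aflite(pool, fit)
print(kept)  # the hard example (0.5, 0) survives the filter
```

The key design point mirrors the paper's description: predictability is estimated only on held-out instances, so an example is dropped for being easy to classify from its features, not for being memorized.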
At the final layer of the transformer, we extract the hidden states corresponding to the positions of each [CLS] token. We apply a linear transformation to each hidden state and apply a softmax over the two options: this approximates the probability that the correct solution is option A or B. During finetuning, we train the model using a cross-entropy loss over the two options. For GPT, we follow the original implementation and include an additional language modeling loss, which improved training stability. Generally, we found that finetuning was often unstable, with some hyperparameter configurations leading to validation performance around chance, particularly for BERT. We follow best practices in using a grid search over learning rates, batch sizes, and the number of training epochs for each model, and report the best-scoring configuration as was found on the validation set. For all models and experiments, we used the transformers library and truncated examples at 150 tokens, which affects 1% of the data. [Table 1: Results of state-of-the-art natural language understanding models on PIQA, compared with human performance (accuracy %, validation / test): Random Chance 50.0 / 50.0; Majority Class 50.5 / 50.4; OpenAI GPT (124M) 70.9 / 69.2; Google BERT (340M) 67.1 / 66.8; FAIR RoBERTa (355M) 79.2 / 77.1; Human 94.9 (validation). The results show a significant gap between model and human performance, of roughly 20 absolute points.] Manual inspection of the development errors shows that some \u201cmistakes\u201d are actually correct but required a web search to verify. Human performance was calculated by a majority vote. Annotators who achieved \u226590% on the qualification HIT from before were chosen to participate. It is therefore completely reasonable that automated methods trained on large web crawls may eventually surpass human performance here.
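The per-choice scoring head described above (a shared linear map on each [CLS] state, a softmax across the two options, cross-entropy on the gold option) is small enough to sketch directly. The dimensions and random vectors here are stand-ins for real transformer outputs.

```python
import numpy as np

def choice_probs(h_cls, w, b=0.0):
    """h_cls: (2, d) [CLS] hidden states for solutions A and B.
    Applies the same linear transformation to each state, then a softmax
    over the two resulting scalars."""
    logits = h_cls @ w + b
    logits = logits - logits.max()  # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum()

def cross_entropy(probs, gold):
    """Negative log-probability of the gold option."""
    return -np.log(probs[gold])

rng = np.random.default_rng(0)
h = rng.normal(size=(2, 16))  # stand-in for transformer [CLS] states
w = rng.normal(size=16)       # learned projection
p = choice_probs(h, w)
loss = cross_entropy(p, gold=0)
print(p.sum())  # the two option probabilities sum to 1
```

Because each choice is encoded independently, the softmax only couples the two scalar scores, which is what makes the two-way formulation symmetric in the options.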
Human evaluation was performed on development data, and the train, development, and test folds were automatically produced by AFLite. Results We present our results in Table 1. As the dataset was constructed to be adversarial to BERT, it is not surprising that it performs the worst of the three models despite generally outperforming GPT on most other benchmarks. Comparing GPT and RoBERTa, we see that despite more training data, a larger vocabulary, twice the number of parameters, and careful construction of robust training, there is only an 8-point performance gain, and RoBERTa still falls roughly 18 points short of human performance on this task. As noted throughout, exploring this gap is precisely the purpose for PIQA's existence, and which facets of the dataset fool RoBERTa is the focus of the remainder of this paper. Analysis In this section, we unpack the results of state-of-the-art models on PIQA. In particular, we take a look at the errors made by the top-performing model, RoBERTa, as a view towards the physical commonsense knowledge that can be learned through language alone. PIQA as a diagnostic for physical understanding The setup of PIQA allows us to use it to probe the inner workings of deep pretrained language models, and to determine the extent of their physical knowledge. In this way, our dataset can augment prior work on studying to what extent models such as BERT understand syntax (Goldberg 2019). However, while syntax is a well-studied problem within linguistics, physical commonsense does not have as rich a literature to borrow from, making its dimensions challenging to pin down. [Figure 6: Breaking down PIQA by edit distance between solution choices. Top: Cumulative histogram of examples in the validation and training sets, in terms of minimum edit distance d between the two solution choices. The majority of the dataset consists of small tweaks between the two solution pairs; nevertheless, this is enough to confuse state-of-the-art NLP models. Bottom: RoBERTa accuracy over validation examples with a minimum edit distance of d. Dataset difficulty increases somewhat as the two solution pairs are allowed to drift further apart.] Simple concepts. Understanding the physical world requires a deep understanding of simple concepts, such as \u201cwater\u201d or \u201cketchup,\u201d and their affordances and interactions with respect to other concepts. Though our dataset covers interactions between and with common objects, we can analyze the space of concepts in the dataset by performing a string alignment between solution pairs. Two solution choices that differ by editing a single phrase must, by definition, test the commonsense understanding of that phrase. In Figure 6 we show the distribution of the edit distance between solution choices. We compute edit distance over tokenized and lowercased strings with punctuation removed. We use a cost of 1 for edits, insertions, and deletions. Most of the dataset covers simple edits between the two solution choices: roughly 60% of the dataset in both validation and training involves a 1-2 word edit between solutions. In the bottom of Figure 6, we show that the dataset complexity generally increases with the edit distance between the solution pairs. Nevertheless, the head of the distribution represents a space that is simple to study. [Figure 7: Common concepts as a window to RoBERTa's understanding of the physical world. We consider validation examples (q, s1, s2) wherein s1 and s2 differ from each other by a given word w. Left, we show the validation accuracy for common words w, while the number of dataset examples is shown right. Though certain concepts such as water occur quite frequently, RoBERTa nevertheless finds those concepts difficult, with 75% accuracy. Additionally, on common relations such as 'cold', 'on', 'before', and 'after', RoBERTa performs roughly at chance.] Single-word edits. In Figure 7, we plot the accuracy of RoBERTa among dataset examples that differ by a single word. More formally, we consider examples (q, s1, s2) whereby moving from s1 to s2, or vice versa, requires editing a given word w. We show examples of words w that occur frequently in both the training and validation splits of the dataset, which allows RoBERTa to refine representations of these concepts during training and gives us a large enough sample size to reliably estimate model performance. As shown, RoBERTa struggles to understand certain highly flexible relations. In particular, Figure 7 highlights the difficulty of correctly answering questions that differ by the words 'before,' 'after,' 'top,' and 'bottom': RoBERTa performs nearly at chance when encountering these. Interestingly, the concepts shown in Figure 7 suggest that RoBERTa also struggles to understand many common, more versatile, physical concepts. Though there are 300 training examples wherein the solution choices s1, s2 differ by the word 'water,' RoBERTa performs worse than average on these replacements. On the other hand, RoBERTa does much better at certain nouns, such as 'spoon.' Common replacements in PIQA.
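The edit-distance analysis above (tokenized, lowercased, punctuation stripped, unit cost for substitutions, insertions, and deletions) is a standard token-level Levenshtein computation; this sketch substitutes whitespace tokenization for the paper's tokenizer.

```python
import string

def tokens(s):
    """Lowercase, strip punctuation, split on whitespace."""
    table = str.maketrans("", "", string.punctuation)
    return s.lower().translate(table).split()

def edit_distance(a, b):
    """Token-level Levenshtein distance with unit costs."""
    ta, tb = tokens(a), tokens(b)
    prev = list(range(len(tb) + 1))
    for i, wa in enumerate(ta, 1):
        cur = [i]
        for j, wb in enumerate(tb, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (wa != wb)))  # substitution
        prev = cur
    return prev[-1]

print(edit_distance(
    "put seasoned beef onto the hard shell",
    "put seasoned beef into the hard shell"))  # 1
```

Under this metric the taco example from Figure 2 is a 1-word edit, which is exactly the head of the distribution (roughly 60% of pairs) described above.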
We dig into this further in Figure 8, where we showcase the most common replacements for three examples: 'water,' 'spoon,' and 'freeze.' (We additionally allow for an additional insertion; this helps to capture simple phrases like going from 'water' to 'olive oil.' Nevertheless, these multiword expressions tend to be less common, which is why we omit them in Figure 7.) [Figure 8: The most common replacements for three selected words: 'water,' 'spoon,' and 'freeze.' These cover several key dimensions: 'water' is a broad noun with many properties and affordances, whereas 'spoons' are much narrower in scope. Perhaps as a result, RoBERTa performs much better at examples where 'spoon' is the pivot word (90%) versus 'water' (75%). 'Freeze' has an accuracy of 66% on the validation set, and shows that verbs are challenging as well.] While 'water' is prevalent in the training set, it is also highly versatile. One can try to substitute it with a variety of different household items, such as 'milk' or 'alcohol,' often to disastrous effects. However, 'spoons' have fewer challenging properties. A spoon cannot generally be substituted with a utensil that is sharp or has prongs, such as a fork, a knife, or a toothpick.
RoBERTa obtains high accuracy on 'spoon' examples, which suggests that it might understand this simple affordance, but does not capture the long tail of affordances associated with 'water.' Qualitative results Our analysis thus far has been on simple-to-analyze single word expressions, where we have shown that the state-of-the-art language model, RoBERTa, struggles at a nuanced understanding of key commonsense concepts, such as relations. To further probe the knowledge gap of these strong models, we present qualitative examples in Figure 9. [Figure 9: Qualitative analysis of RoBERTa's predictions. Left: two examples that RoBERTa gets right. Right: two examples that RoBERTa gets incorrect. Short phrases that differ between solution 1 and solution 2 are shown in bold and italics. Correct examples: [Goal] Best way to pierce ears. [Sol1] 'It is best to go to a professional to get your ear pierced to avoid medical problems later.' (correct) vs. [Sol2] 'The best way to pierce your ears would be to insert a needle half inch thick into the spot you want pierced.' (incorrect). [Goal] How do you reduce wear and tear on the nonstick finish of muffin pans? [Sol1] 'Make sure you use paper liners to protect the nonstick finish when baking muffins and cupcakes in muffin pans.' (correct) vs. [Sol2] 'Make sure you use grease and flour to protect the nonstick finish when baking muffins and cupcakes in muffin pans.' (incorrect). Incorrect examples: [Goal] How can I quickly and easily remove strawberry stems? [Sol1] 'Take a straw and from the top of the strawberry push the straw through the center of the strawberry until the stem pops off.' (incorrect) vs. [Sol2] 'Take a straw and from the bottom of the strawberry push the straw through the center of the strawberry until the stem pops off.' (correct). [Goal] how to add feet to a coaster. [Sol1] 'cut four slices from a glue stick, and attach to the coaster with glue.' (correct) vs. [Sol2] 'place a board under the coaster, and secure with zip ties and a glue gun.' (incorrect).] The examples are broadly representative of larger patterns: RoBERTa can recognize clearly ridiculous generations (Figure 9, top left) and understands differences between some commonsense concepts (bottom left). It is important to note that in both cases the correct answer is prototypical and something we might expect the models to have seen before. However, it struggles to tell the difference between subtle relations such as top and bottom (top right of Figure 9). Moreover, it struggles with identifying non-prototypical situations (bottom right). Though using a glue stick as feet for a coaster is uncommon, to a human familiar with these concepts we can visualize the action and its result to verify that the goal has been achieved. Overall, these examples suggest that physical understanding, particularly involving novel combinations of common objects, challenges models that were pretrained on text only. Related Work Physical understanding is a broad domain that touches on everything from scientific knowledge (Schoenick et al. 2016) to the interactive acquisition of knowledge by embodied agents (Thomason et al. 2016). To this end, work related to the goals of our benchmark spans the NLP, Computer Vision, and Robotics communities. Language. Within NLP, in addition to large-scale models, there has also been progress on reasoning about the effects and implications of cause and effect within these models (Bosselut et al. 2019), extracting knowledge from them (Petroni et al. 2019), and investigating where large-scale language models fail to capture knowledge of tools and elided procedural knowledge in recipes (Bisk et al. 2019). The notion of procedural knowledge and instruction following is a more general related task within vision and robotics.
From text alone, work has shown that much can be understood about the implied physical situations of verb usage (Forbes and Choi 2017) and relative sizes of objects (Elazar et al. 2019). Vision. Physical knowledge can be discovered and evaluated within the visual world. Research has studied predicting visual relationships in images (Krishna et al. 2016) as well as actions and their dependent objects (Yatskar, Zettlemoyer, and Farhadi 2016). Relatedly, the recent HAKE dataset (Li et al. 2019) specifically annotates which object/body-parts are essential to completing or defining an action. Image data also allows for studying the concreteness of nouns and provides a natural path forward for further investigation (Hessel, Mimno, and Lee 2018). Related to physical commonsense, research in visual commonsense has studied intuitive physics (Wu et al. 2017), cause-effect relationships (Mottaghi et al. 2016), and what can be reasonably inferred beyond a single image (Zellers et al. 2019a). Robotics. Learning from interaction and intuitive physics (Agrawal et al. 2016) can also be encoded as priors when exploring the world (Byravan et al. 2018), and internal models of physics, shape, and material strength enable advances in tool usage (Toussaint et al. 2018) or construction (Nair, Balloch, and Chernova 2019). Key to our research aims in this work is helping to build language tools which capture enough physical knowledge to speed up the bootstrapping of robotic-language applications. Language tools should provide strong initial priors for learning (Tellex et al. 2011; Matuszek 2018) that are then refined through interaction and dialogue (Gao et al. 2016). Conclusion We have evaluated against large-scale pretrained models as they are in vogue as the de facto standard of progress within NLP, but we are primarily interested in their performance and failings as a mechanism for advancing the position that learning about the world from language alone is limiting.
Future research may \u201cmatch\u201d humans on our dataset by finding a large source of in-domain data and fine-tuning heavily, but this is very much not the point. Philosophically, knowledge should be learned from interaction with the world to eventually be communicated with language. In this work we introduce the Physical Interaction: Question Answering, or PIQA, benchmark for evaluating and studying physical commonsense understanding in natural language models. We find the best available pretrained models lack an understanding of some of the most basic physical properties of the world around us. Our goal with PIQA is to provide insight and a benchmark for progress towards language representations that capture knowledge traditionally only seen or experienced, to enable the construction of language models useful beyond the NLP community. Acknowledgements We thank the anonymous reviewers for their insightful suggestions. This research was supported in part by NSF (IIS-1524371, IIS-1714566), DARPA under the CwC program through the ARO (W911NF-15-1-0543), DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031), and the NSF-GRFP No. DGE-1256082. Computations on beaker.org were supported in part by Google Cloud."
+ },
+ {
+ "url": "http://arxiv.org/abs/1609.09405v2",
+ "title": "Evaluating Induced CCG Parsers on Grounded Semantic Parsing",
+ "abstract": "We compare the effectiveness of four different syntactic CCG parsers for a\nsemantic slot-filling task to explore how much syntactic supervision is\nrequired for downstream semantic analysis. This extrinsic, task-based\nevaluation provides a unique window to explore the strengths and weaknesses of\nsemantics captured by unsupervised grammar induction systems. We release a new\nFreebase semantic parsing dataset called SPADES (Semantic PArsing of\nDEclarative Sentences) containing 93K cloze-style questions paired with\nanswers. We evaluate all our models on this dataset. Our code and data are\navailable at https://github.com/sivareddyg/graph-parser.",
+ "authors": "Yonatan Bisk, Siva Reddy, John Blitzer, Julia Hockenmaier, Mark Steedman",
+ "published": "2016-09-29",
+ "updated": "2017-01-31",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "main_content": "Introduction The past several years have seen signi\ufb01cant progress in unsupervised grammar induction (Carroll and Charniak, 1992; Yuret, 1998; Klein and Manning, 2004; Spitkovsky et al., 2010; Garrette et al., 2015; Bisk and Hockenmaier, 2015). But how useful are unsupervised syntactic parsers for downstream NLP tasks? What phenomena are they able to capture, and where would additional annotation be required? Instead of standard intrinsic evaluations \u2013 attachment scores that depend strongly on the particular annotation styles of the gold treebank \u2013 we examine the utility of unsupervised and weakly supervised parsers for semantics. We perform an extrinsic evaluation of unsupervised and weakly supervised CCG parsers on \u2217Equal contribution a grounded semantic parsing task that will shed light on the extent to which these systems recover semantic information. We focus on English to perform a direct comparison with supervised parsers (although unsupervised or weakly supervised approaches are likely to be most bene\ufb01cial for domains or languages where supervised parsers are not available). Speci\ufb01cally, we evaluate different parsing scenarios with varying amounts of supervision. These are designed to shed light on the question of how well syntactic knowledge correlates with performance on a semantic evaluation. We evaluate the following scenarios (all of which assume POS-tagged input): 1) no supervision; 2) a lexicon containing words mapped to CCG categories; 3) a lexicon containing POS tags mapped to CCG categories; 4) sentences annotated with CCG derivations (i.e., fully supervised). Our evaluation reveals which constructions are problematic for unsupervised parsers (and annotation efforts should focus on). Our results indicate that unsupervised syntax is useful for semantics, while a simple semi-supervised parser outperforms a fully unsupervised approach, and could hence be a viable option for low resource languages. 
2 CCG Intrinsic Evaluations CCG (Steedman, 2000) is a lexicalized formalism in which words are assigned syntactic types, also known as supertags, encoding subcategorization information. Consider the sentence Google acquired Nest in 2014 and its CCG derivations shown in Figure 1. [Figure 1: Example of multiple valid derivations that can be grounded to the same Freebase logical form (Eq. 1) even though they differ dramatically in performance under parsing metrics (5, 4, or 3 \u201ccorrect\u201d supertags). Panels: (a) in 2014 modifies acquired Nest; (b) in 2014 modifies Google acquired Nest; (c) acquired takes the argument in 2014.] In (a) and (b), the supertag of acquired, (S\\NP)/NP, indicates that it has two arguments, and the prepositional phrase in 2014 is an adjunct, whereas in (c) the supertag ((S\\NP)/PP)/NP indicates acquired has three arguments, including the prepositional phrase. In (a) and (b), the derivation differs depending on the supertag of in. When trained on labeled treebanks, (a) is preferred. Note, however, that all these derivations could lead to the same semantics (e.g., to the logical form in Equation 1). Without syntactic supervision, there may not be any reason for the parser to prefer one analysis over the other. One procedure to evaluate unsupervised induction methods has been to compare the assigned supertags to treebanked supertags, but this evaluation does not consider that multiple derivations could lead to the same semantics. This problem is also not solved by evaluating syntactic dependencies. 
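To make the mismatch between supertag accuracy and semantic adequacy concrete, here is a toy check with the supertags transcribed from Figure 1 (the function name and code layout are ours, purely for illustration):

```python
# Supertags for "Google acquired Nest in 2014" under the three
# derivations of Figure 1; the treebank analysis is derivation (a).
deriv_a = ["NP", "(S\\NP)/NP", "NP", "((S\\NP)\\(S\\NP))/NP", "NP"]
deriv_b = ["NP", "(S\\NP)/NP", "NP", "(S\\S)/NP", "NP"]
deriv_c = ["NP", "((S\\NP)/PP)/NP", "NP", "PP/NP", "NP"]

def supertag_matches(predicted, gold):
    """Count supertags that agree with the treebank analysis."""
    return sum(p == g for p, g in zip(predicted, gold))

# 5, 4, and 3 "correct" supertags respectively, even though all three
# derivations can be grounded to the same Freebase logical form.
scores = [supertag_matches(d, deriv_a) for d in (deriv_a, deriv_b, deriv_c)]
print(scores)  # [5, 4, 3]
```

Intrinsic evaluation would thus penalize (b) and (c) despite their semantic equivalence to (a).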
Moreover, while many dependency standards agree on the head direction of simple constituents (e.g., noun phrases), they disagree on the most semantically useful ones (e.g., coordination and relative clauses).1 [Footnote 1: Please see Bisk and Hockenmaier (2013) for more details.] 3 Our Proposed Evaluation The above syntax-based evaluation metrics conceal the real performance differences and their effect on downstream tasks. Here we propose an extrinsic evaluation where we evaluate our ability to convert sentences to Freebase logical forms via CCG derivations. Our motivation is that most sentences can only have a single realization in Freebase, and any derivation that could lead to this realization is potentially a correct derivation. For example, the Freebase logical form for the example sentence in Figure 1 is shown below, and none of its derivations are penalized if they could result in this logical form: \u03bbe. business.acquisition(e) \u2227 acquiring_company(e, GOOGLE) \u2227 company_acquired(e, NEST) \u2227 date(e, 2014) (1) Since grammar induction systems are traditionally trained on declarative sentences, we would ideally require declarative sentences paired with Freebase logical forms. But such datasets do not exist in the Freebase semantic parsing literature (Cai and Yates, 2013; Berant et al., 2013). To alleviate this problem, and yet perform Freebase semantic parsing, we propose an entity slot-filling task. Entity Slot-Filling Task. Given a declarative sentence containing mentions of Freebase entities, we randomly remove one of the mentions to create a blank slot. The task is to fill this slot by translating the declarative sentence into a Freebase query. Consider the following sentence, where the entity Nest has been removed: Google acquired \u27e8blank\u27e9 which was founded in Palo Alto. To correctly fill in the blank, one has to query Freebase for the entities acquired by Google (constraint 1) and founded in Palo Alto (constraint 2). 
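A minimal sketch of filling and scoring the slot, assuming the executed query returns an ordered list of entities and the first one is taken as the prediction (the entity lists below are invented for illustration):

```python
def fill_slot(answer_entities):
    """Take the first entity returned by the Freebase query as the prediction."""
    return answer_entities[0] if answer_entities else None

def is_correct(answer_entities, gold_entity):
    """The prediction counts as correct only if it equals the gold entity."""
    return fill_slot(answer_entities) == gold_entity

# With both constraints applied (acquired by Google AND founded in Palo
# Alto), the query narrows down to the gold entity.
both_constraints = ["Nest"]
# Dropping constraint 2 leaves many acquired companies, and the first
# answer need not be the gold one (candidates here are illustrative).
one_constraint = ["YouTube", "DeepMind", "Nest"]

print(is_correct(both_constraints, "Nest"), is_correct(one_constraint, "Nest"))
# True False
```

This makes explicit why missing even one constraint leads to an ambiguous, and usually wrong, answer.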
If either of those constraints is not applied, there will be many candidate entities as answers. For each question, we execute a single Freebase query containing all the constraints and retrieve a list of answer entities. From this list, we pick the first entity as our predicted answer, and consider the prediction correct if the gold answer is the same as the predicted answer. 4 Sentences to Freebase Logical Forms CCG provides a clean interface between syntax and semantics, i.e., each argument of a word's syntactic category corresponds to an argument of the lambda expression that defines its semantic interpretation (e.g., the lambda expression corresponding to the category (S\\NP)/NP of the verb acquired is \u03bbf.\u03bbg.\u03bbe.\u2203x.\u2203y. acquired(e) \u2227 f(x) \u2227 g(y) \u2227 arg1(e, y) \u2227 arg2(e, x)), and the logical form for the complete sentence can be constructed by composing word-level lambda expressions following the syntactic derivation (Bos et al., 2004). In Figure 2 we show two syntactic derivations for the same sentence, and the corresponding logical forms and equivalent graph representations derived by GRAPHPARSER (Reddy et al., 2014). [Figure 2: The lexical categories for \u201cwhich\u201d determine the relative clause attachment and therefore the resulting ungrounded logical form. Top: \u03bbe1.\u2203x.\u2203e2. TARGET(x) \u2227 acquired(e1) \u2227 arg1(e1, Google) \u2227 arg2(e1, x) \u2227 founded(e2) \u2227 arg2(e2, x) \u2227 in(e2, PaloAlto); this derivation correctly executes a query to retrieve companies founded in Palo Alto and acquired by Google. Bottom: \u03bbe1.\u2203x.\u2203e2. TARGET(x) \u2227 acquired(e1) \u2227 arg1(e1, Google) \u2227 arg2(e1, x) \u2227 founded(e2) \u2227 arg2(e2, Google) \u2227 in(e2, PaloAlto); this derivation incorrectly asserts that Google was founded in Palo Alto.] The graph representations are possible because GRAPHPARSER assumes access to co-indexations of input CCG categories. We provide co-indexation for all induced categories, including multiple co-indexations when an induced category is ambiguous. For example, (S\\N)/(S\\N) refers to either (Sx\\Ny)/(Sx\\Ny), indicating an auxiliary verb, or (Sx\\Ny)/(Sz\\Ny), indicating a control verb. Initially, the predicates in the expression/graph will be based entirely on the surface form of the words in the sentence. This is the \u201cungrounded\u201d semantic representation. Our next step is to convert these ungrounded graphs to Freebase graphs.2 [Footnote 2: Note that there is a one-to-one correspondence between Freebase graphs and Freebase logical forms.] Like Reddy et al. (2014), we treat this problem as a graph matching problem. Using GRAPHPARSER we retrieve all the Freebase graphs that are isomorphic to the ungrounded graph, and select only the graphs that could correctly predict the blank slot as candidate graphs. Using these candidate graphs, we train a structured perceptron that learns to rank grounded graphs for a given ungrounded graph.3 [Footnote 3: Please see Section 4.3 of Reddy et al. (2016) for details.] We use ungrounded-predicate and Freebase-predicate alignments as our features. 
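A compact sketch of such a ranker, with graphs reduced to tuples of predicates and features reduced to predicate-alignment pairs (all names, the update rule's details, and the toy graphs are ours, not the paper's implementation):

```python
from collections import Counter

def alignment_features(ungrounded, grounded):
    """Features: counts of (ungrounded predicate, Freebase predicate) alignments."""
    return Counter(zip(ungrounded, grounded))

def score(weights, features):
    return sum(weights.get(f, 0.0) * n for f, n in features.items())

def rank_and_update(weights, ungrounded, candidates, correct):
    """Rank candidate grounded graphs; if the top-ranked one would not fill
    the slot correctly, promote a correct candidate and demote the guess."""
    best = max(candidates, key=lambda g: score(weights, alignment_features(ungrounded, g)))
    if best not in correct:
        for f, n in alignment_features(ungrounded, correct[0]).items():
            weights[f] = weights.get(f, 0.0) + n
        for f, n in alignment_features(ungrounded, best).items():
            weights[f] = weights.get(f, 0.0) - n
    return best

# Toy example: ungrounded edges for "Google acquired <blank>".
ungrounded = ("acquired.arg1", "acquired.arg2")
good = ("acquiring_company", "company_acquired")  # fills the slot correctly
bad = ("founder_of", "organization_founded")      # isomorphic but wrong grounding
weights = {}
rank_and_update(weights, ungrounded, [bad, good], correct=[good])  # one update
print(rank_and_update(weights, ungrounded, [bad, good], correct=[good]) == good)
```

After a single mistake-driven update, the alignment features for the correct grounding outweigh the wrong one.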
5 Experiments 5.1 Training and Evaluation Datasets Our dataset SPADES (Semantic PArsing of DEclarative Sentences) is constructed from the declarative sentences collected by Reddy et al. (2014) from CLUEWEB09 (Gabrilovich et al., 2013) based on the following constraints: 1) there exists at least one Freebase graph isomorphic to the ungrounded representation of the input sentence; 2) there are no variable nodes in the ungrounded graph (e.g., Google acquired a company is discarded whereas Google acquired the company Nest is selected). We split this data into training (85%), development (5%) and testing (10%) sentences (Table 1). [Table 1: SPADES corpus statistics. Train: 79,247 sentences, 685,922 tokens, 69,095 types, 37,606 entities; Dev: 4,763 sentences, 41,102 tokens, 9,306 types, 4,358 entities; Test: 9,309 sentences, 80,437 tokens, 15,180 types, 7,431 entities.] We introduce empty slots into these sentences by randomly removing an entity. SPADES can be downloaded at http://github.com/sivareddyg/graph-parser. There has been other recent interest in similar datasets for sentence completion (Zweig et al., 2012) and machine reading (Hermann et al., 2015), but unlike other corpora our data is tied directly to Freebase and requires the execution of a semantic parse to correctly predict the missing entity. This is made more explicit by the fact that one third of the entities in our test set are never seen during training, so without a general approach to query creation and execution there is a limit on a system\u2019s performance. [Table 2: Syntactic and semantic evaluation of the parsing models. Left: simplified labeled F1 (LF1) and undirected unlabeled F1 (UF1) on CCGbank, Section 23. Right: slot-filling accuracy by number of entities per sentence (2 / 3 / 4 / Overall, over ~6K / ~3K / ~600 / ~10K sentences). Bag-of-Words: no syntax scores; 34.1 / 28.1 / 16.5 / 31.4. UNSUPERVISED: LF1 37.1, UF1 64.2; 27.7 / 20.0 / 16.8 / 24.8. SEMI-SUPERVISED-POS: LF1 53.0, UF1 68.5; 30.5 / 21.8 / 18.8 / 27.3. SEMI-SUPERVISED-WORD: LF1 53.5, UF1 68.9; 30.7 / 25.5 / 18.7 / 28.4. SUPERVISED: LF1 84.2, UF1 91.0; 32.7 / 29.0 / 20.2 / 30.9. Note: the slot-filling results were updated after the camera-ready submission. In the previous version, instead of evaluating whether the gold entity is the same as the first predicted entity, we mistakenly evaluated whether the gold entity is present in the list of predicted answer entities. The initial claims are still valid; all other results and discussion are revised.] 5.2 Our Models We use different CCG parsers varying in the amounts of supervision. For the UNSUPERVISED scenario, we use Bisk and Hockenmaier (2015)\u2019s parser, which exploits a small set of universal rules to automatically induce and weight a large set of lexical categories. For the semi-supervised scenarios, we explore two options \u2013 SEMI-SUPERVISED-WORD and SEMI-SUPERVISED-POS. We use the Bisk and Hockenmaier parser in both settings but we constrain its lexicon manually rather than inducing it from scratch. In the former, we restrict the top 200 words in English to occur only with the CCG categories that comprise 95% of the occurrences of a word\u2019s use in Section 22 of WSJ/CCGbank. In the latter, we restrict the POS tags instead of the words. For the SUPERVISED scenario, we use EasyCCG (Lewis and Steedman, 2014) trained on CCGbank. Finally, in order to further demonstrate the amount of useful information being learned by our parsers, we present a competitive Bag-of-Words baseline: a perceptron classifier that performs \u201csemantic parsing\u201d by predicting either a Freebase or a null relation between the empty slot and every other entity in the sentence, using the words in the sentence as features. This naive approach is competitive on simple sentences with only two entities, rivaling even the fully supervised parser, but falters as complexity increases. 
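The Bag-of-Words baseline can be sketched as a perceptron that, for each (slot, entity) pair, scores every candidate relation, including null, from word features alone (the relation names, sentence, and update rule here are toy stand-ins, not the paper's exact classifier):

```python
def bow_score(weights, words, relation):
    """Score a candidate relation for a (slot, entity) pair using only
    sentence words conjoined with the relation label as features."""
    return sum(weights.get((w, relation), 0.0) for w in words)

def predict(weights, words, relations):
    return max(relations, key=lambda r: bow_score(weights, words, r))

def train_step(weights, words, relations, gold):
    """Simple mistake-driven perceptron update toward the gold relation."""
    guess = predict(weights, words, relations)
    if guess != gold:
        for w in words:
            weights[(w, gold)] = weights.get((w, gold), 0.0) + 1.0
            weights[(w, guess)] = weights.get((w, guess), 0.0) - 1.0

relations = ["null", "acquired_by"]
weights = {}
train_step(weights, ["google", "acquired", "nest"], relations, "acquired_by")
print(predict(weights, ["google", "acquired", "nest"], relations))  # acquired_by
```

Because the features ignore which entity a relation attaches to, the same words support contradictory decisions once a sentence contains several entities, which is consistent with the baseline faltering as complexity increases.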
5.3 Results and Discussion Our primary focus is a comparison of intrinsic syntactic evaluation with our extrinsic semantic evaluation. To highlight the differences, we present Section 23 parsing performance for our four models in Table 2. Dependency performance is evaluated on both the simplified labeled F1 of Bisk and Hockenmaier (2015) and undirected unlabeled F1. Despite the supervised parser performing almost twice as well as the semi-supervised parsers on CCGbank LF1 (53.5 vs 84.2), in our semantic evaluation we see a comparatively small gain in performance (28.4 vs 30.9). It is interesting that such weakly supervised models are able to achieve over 90% of the performance of a fully supervised parser. To explore this further, we break down the semantic performance of all our models by the number of entities in a sentence. Each sentence has two, three, or four entities, one of which will be dropped for prediction. The more entities there are in a sentence, the more likely the models are to misanalyze a relation, leading to a wrong prediction. These results are presented on the right side of Table 2. There are still notable discrepancies in performance, which we analyze more closely in the next section. Another interesting result is the drop in performance by the Bag-of-Words model. As the number of entities in the sentence increases, the model weakens, performing worse than the unsupervised parser on sentences with four entities. It becomes non-trivial for it to isolate which entities and relations should be used for prediction. This seems to indicate that the unsupervised grammar is capturing more useful syntactic/semantic information than what is available from the words alone. Ensemble systems that incorporate syntax and a Bag-of-Words baseline may yield even better performance. 5.4 The Benefits of Annotation The performance of SEMI-SUPERVISED-POS and SEMI-SUPERVISED-WORD suggests that when resources are scarce, it is beneficial to create even a small lexicon of CCG categories. We analyze this further in Figure 3. [Figure 3: The effect of increasing the number of lexical types in SEMI-SUPERVISED-WORD on syntax and semantics (lexical types: syntax LF1 / semantics accuracy). 0: 37.1 / 25.72; 100: 48.49 / 27.56; 200: 53.5 / 30.05; 500: 53.36 / 29.18; 1000: 49.87 / 27.92. When the lexicon grows past 200 lexical types, both syntax and semantics begin to degrade. We also observe a correlation between syntactic parsing and semantic parsing performance.] Here we show how performance changes as a function of the number of labeled lexical types. Our values range from 0 to 1000 lexical types. We see syntactic improvements of 16pts and semantic gains of 4.33pts (16.8%) with 200 words, before performance degrades. It is possible that increasing annotation may only benefit fully supervised models. Finally, when computing the most frequent lexical types we excluded commas. We found a drop in performance when restricting commas to the category , (they are commonly conj in our data). Additional in-domain knowledge might further improve performance. [Table 3: Causes of semantic grounding errors with examples, not previously isolated via intrinsic evaluation. Prevalent errors: incorrect conjunction (Stockholm, Sweden); appositive (, a chemist ,); introductory clauses (In Frankfurt, ...); reduced relatives (... , established in 1909, ...). Errors noted by Bisk and Hockenmaier (2015): verb chains (is also headquartered); possessive (Anderson\u2019s Foundation); PP attachment (of the foundation in Vancouver).] 5.5 Common Errors Bisk and Hockenmaier (2015) performed an in-depth analysis of the types of categories learned and correctly used by their models (the same models as in this paper). Their analysis was based on syntactic evaluation against CCGbank. 
In particular, they found the most egregious \u201csemantic\u201d errors to be the misuse of verb chains, possessives, and PP attachment (bottom of Table 3). Since we now have access to a purely semantic evaluation, we can therefore ask whether these errors exist here, and how common they are. We do this analysis in two steps. First, we manually analyzed parses for which the unsupervised model failed to predict the correct semantics, but where the supervised parser succeeded. The top of Table 3 presents several of the most common reasons for failure. These mistakes were more mundane (e.g., incorrect use of a conjunction) than failures to use complex CCG categories or analyze attachments. Second, we can compare grammatical decisions made by the semi-supervised and unsupervised parsers against EasyCCG on sentences they successfully grounded. Bisk and Hockenmaier (2015) found that their unsupervised parser made mistakes on many very simple categories. We found the same result. When evaluating our parsers against the treebank, we found the unsupervised model only correctly predicted transitive verbs 20% of the time and adverbs 39% of the time. In contrast, on our data, we produced the correct transitive category (according to EasyCCG) 65% of the time, and the correct adverb 68% of the time. These correct parsing decisions also lead to improved performance across many other categories (e.g., prepositions). This is likely due to our corpus containing simpler constructions. In contrast, auxiliary verbs, relative clauses, and commas still proved difficult, or harder than in the treebank. This implies that future work should tailor the annotation effort to their specific domain rather than relying on guidance solely from the treebank. 6 Conclusion Our goal in this paper was to present the first semantic evaluation of induced grammars in order to better understand their utility and strengths. 
We showed that induced grammars are learning more semantically useful structure than a Bag-of-Words model. Furthermore, we showed how minimal syntactic supervision can provide substantial gains in semantic evaluation. Our ongoing work explores creating a syntax-semantics loop where each benefits the other with no human (annotation) in the loop. Acknowledgments This paper is partly based on work that was done when the first and second authors were interns at Google, and on work that was supported by NSF grant 1053856 to JH, and a Google PhD Fellowship to SR."
+ }
+ ],
+ "Igor Shalyminov": [
+ {
+ "url": "http://arxiv.org/abs/2012.02929v1",
+ "title": "Data-Efficient Methods for Dialogue Systems",
+ "abstract": "Conversational User Interface (CUI) has become ubiquitous in everyday life,\nin consumer-focused products like Siri and Alexa or business-oriented\nsolutions. Deep learning underlies many recent breakthroughs in dialogue\nsystems but requires very large amounts of training data, often annotated by\nexperts. Trained with smaller data, these methods end up severely lacking\nrobustness (e.g. to disfluencies and out-of-domain input), and often just have\ntoo little generalisation power. In this thesis, we address the above issues by\nintroducing a series of methods for training robust dialogue systems from\nminimal data. Firstly, we study two orthogonal approaches to dialogue:\nlinguistically informed and machine learning-based - from the data efficiency\nperspective. We outline the steps to obtain data-efficient solutions with\neither approach. We then introduce two data-efficient models for dialogue\nresponse generation: the Dialogue Knowledge Transfer Network based on latent\nvariable dialogue representations, and the hybrid Generative-Retrieval\nTransformer model (ranked first at the DSTC 8 Fast Domain Adaptation task).\nNext, we address the problem of robustness given minimal data. As such, we propose\na multitask LSTM-based model for domain-general disfluency detection. For the\nproblem of out-of-domain input, we present Turn Dropout, a data augmentation\ntechnique for anomaly detection only using in-domain data, and introduce\nautoencoder-augmented models for efficient training with Turn Dropout. Finally,\nwe focus on social dialogue and introduce a neural model for response ranking\nin social conversation used in Alana, the 3rd place winner in the Amazon Alexa\nPrize 2017 and 2018. We employ a novel technique of predicting the dialogue\nlength as the main ranking objective and show that this approach improves upon\nthe ratings-based counterpart in terms of data efficiency while matching it in\nperformance.",
+ "authors": "Igor Shalyminov",
+ "published": "2020-12-05",
+ "updated": "2020-12-05",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "I.2.7"
+ ],
+ "main_content": "Contents: 1 Introduction; 1.1 Dialogue Systems (CUIs); 1.2 The Need for Data-Efficient Dialogue Models; 1.3 Problems Addressed in the Thesis; 1.4 Contributions and Thesis Structure; 2 Background and Motivation; 2.1 Conventional Dialogue System Architecture; 2.1.1 Natural Language Understanding (NLU); 2.1.2 Dialogue State Tracking (DST); 2.1.3 Dialogue Policy; 2.1.4 Natural Language Generation (NLG); 2.2 Fully Data-Driven Architectures; 2.2.1 Response Retrieval Models; 2.2.2 Response Generation Models; 2.3 Key Techniques for Dialogue Response Generation Models; 2.3.1 Hierarchical Response Generation Models; 2.3.2 Representation Learning with Autoencoders; 2.3.3 Latent Variable Models for Dialogue Response Generation; 2.3.4 Attention Mechanism; 2.3.5 Copy-Augmented Decoding; 2.3.6 Self-Attention; 2.4 Transfer Learning in NLP and Dialogue Modelling; 2.4.1 Word Embedding Models; 2.4.2 Contextual Word Embeddings; 2.4.3 Transfer Learning for Dialogue; 2.4.4 Dialogue Transfer Learning Intuition: Lexical and Interactional Dialogue Similarity; 2.5 Linguistically Informed Models of Dialogue; 2.5.1 Dynamic Syntax and Type Theory with Records (DS-TTR); 2.5.2 The BABBLE Dialogue Model; 2.5.3 Semantic User Simulation for BABBLE; 2.6 Generalisation Power and Robustness of Dialogue Models; 2.6.1 Spoken Disfluencies and Data Efficiency; 2.6.1.1 Incremental Disfluency Detection Models; 2.6.2 Out-of-Domain Robustness and Data Efficiency; 2.7 Dialogue Datasets and Data Collection; 3 Linguistic Knowledge or Learning from Examples: A Data Efficiency Perspective; 3.1 Motivation; 3.2 Experimental Framework \u2014 bAbI Dialog Tasks; 3.3 The bAbI+ Dataset; 3.4 Models; 3.4.1 MemN2N; 3.4.2 DyLan: bAbI and bAbI+ Setup Details; 3.5 Experiments; 3.5.1 Experiment 1: Generalisation from Small Data; 3.5.1.1 Results: Predicting System Turns; 3.5.2 Experiment 2: Semantic Accuracy; 3.5.2.1 Results: Prediction of API Calls (DyLan results; MemN2N results \u2014 small data setup; MemN2N results \u2014 full data setup); 3.5.2.2 How Much Data Is Enough Data?; 3.6 Discussion; 3.6.1 MemN2N Analysis; 3.6.2 DyLan Analysis; 3.7 Conclusions; 4 Learning Transferable Dialogue Representations; 4.1 Few-Shot Dialogue Generation; 4.2 The Base Model; 4.3 Dialogue Knowledge Transfer Networks; 4.3.1 Stage 1. Dialogue Representation Pre-training; 4.3.2 Stage 2. Transfer; 4.4 Baselines; 4.5 Datasets; 4.6 Experimental Setup and Evaluation; 4.7 Results and Discussion; 4.7.1 Results for the Few-Shot Setup; 4.7.2 Discussion of the Latent Representations; 4.7.3 Results with Extended Data; 4.7.4 Discussion of the Evaluation Metrics; 4.8 Conclusion; 5 Dialogue Domain Adaptation; 5.1 Fast Domain Adaptation of a Dialogue System; 5.1.1 DSTC-8, Fast Domain Adaptation Task; 5.2 Proposed Model; 5.2.1 Fine-tuning on Target Domains and Prediction; 5.2.2 Hybrid Generative-Retrieval Prediction; 5.3 Baselines and Competing Models; 5.4 Datasets; 5.5 Experimental Setup and Evaluation; 5.5.1 Human Evaluation; 5.5.2 Automatic Evaluation; 5.6 Results and Discussion; 5.6.1 Human Evaluation; 5.6.2 Automatic Evaluation; 5.6.3 Analysis of the Generated/Retrieved Responses; 5.7 Conclusion; 6 Spoken Disfluency Detection; 6.1 Motivation; 6.2 A Multitask LSTM-based Model for Spoken Disfluency Detection; 6.3 The Switchboard Dialog Acts Dataset; 6.4 Disfluency Detection Generalisation to bAbI+; 6.5 Evaluation and Experimental Setup; 6.6 Results; 6.7 Conclusion; 7 Improving Out-of-Domain Robustness of Dialogue Systems; 7.1 The Experimental Environment for OOD Robustness Studies; 7.2 OOD-Robust Dialogue Models; 7.2.1 Hybrid Code Network Model Family; 7.2.2 AE-HCN; 7.2.3 AE-HCN-CNN; 7.3 Training with Turn Dropout; 7.3.1 TD-HCN; 7.3.2 Training the AE-HCN(-CNN); 7.4 Experiment 1: HHCN & VHCN; 7.4.1 Results and Discussion; 7.5 Experiment 2: AE-HCN(-CNN); 7.5.1 Experimental Setup and Evaluation; 7.6 Conclusion; 8 Data-Efficiency in Social Dialogue; 8.1 Data Efficiency in Open-Domain Dialogue; 8.1.1 The Need for Data Efficiency; 8.1.2 Users\u2019 Ratings and Explicit Feedback; 8.2 A Neural Ranker for Open-Domain Conversation; 8.2.1 Ranker Architecture; 8.2.2 Training Method; 8.3 Baselines; 8.3.1 Handcrafted Ranker; 8.3.2 Linear Ranker; 8.3.3 Dual-Encoder Ranker; 8.4 Training Data; 8.5 Evaluation and Experimental Setup; 8.5.1 Evaluation Based on Explicit User Feedback; 8.5.2 Interim Results; 8.6 Training on Larger Amounts of Data; 8.7 Discussion; 8.8 Related Work; 8.9 Conclusion; 9 Conclusions and Future Work; 9.1 Directions for Future Work; A Dialogue Knowledge Transfer Networks \u2014 Supplementary Material; B Generative-Retrieval Transformers \u2014 Supplementary Material; C Disfluency Detection \u2014 Supplementary Material; D Data-Efficiency in Social Dialogue \u2014 Supplementary Material; Bibliography. List of Figures: 1.1 Enterprise dialogue ecosystem; 2.1 Conventional dialogue system architecture; 2.2 Dialogue as a Markov Decision Process; 2.3 Hybrid Code Network architecture; 2.4 RNN-based Natural Language Generation; 2.5 The DSSM model architecture for web search; 2.6 MemN2N model architecture; 2.7 Sequence-to-sequence conversation model architecture; 2.8 HRED model architecture; 2.9 RNN-based autoencoder; 2.10 RNN-based VAE; 2.11 The DI-VAE/DI-VST models; 2.12 Latent variable Seq2Seq model; 2.13 The LAED architecture; 2.14 Visualisation of attention in machine translation; 2.15 Attention mechanism; 2.16 Pointer-Generator network architecture; 2.17 Pointer-Sentinel Mixture Model; 2.18 Transformer encoder-decoder architecture; 2.19 Word2Vec model architecture; 2.20 Example relations between Word2Vec and GloVe vectors; 2.21 BERT \u2018pretrain-finetune\u2019 architecture; 2.22 Interactional variations in a shopping domain; 2.23 Incremental parsing with DS-TTR/DyLan; 2.24 Processing self-corrections and clarification requests with DS-TTR/DyLan; 2.25 Semantics to MDP state encoding F with RT features; 2.26 ParlAI web interface for Wizard-of-Oz data collection; 3.1 bAbI Dialog Tasks structure; 3.2 Few-shot performance of DyLan and MemN2N; 4.1 DI-VAE and DI-VST (DiKTNet Stage 1); 4.2 DiKTNet Stage 2 (tokens in brackets are KB data); 4.3 Models\u2019 performance on the SMD dataset; 4.4 DiKTNet performance with extended amounts of target data used for training; 5.1 Example DSTC-8 support set and target dialogue in the travel domain; 5.2 GRTr model diagram; 5.3 DSTC-8 Fast Domain Adaptation \u2014 human evaluation; 5.4 Automatic evaluation on MetaLWOz; 5.5 Automatic evaluation on MultiWOZ; 5.6 GRTr generated/retrieved response ratios; 6.1 Multitask LSTM disfluency detector architecture; 6.2 Statistics of the SWDA corpus; 7.1 Hybrid Code Network model family; 7.2 AE-HCN model architecture; 7.3 Histograms of AE reconstruction scores for the bAbI6 test data; 8.1 Alana architecture, with an example chat; 8.2 Neural ranker architecture; 8.3 Comparison of rankers trained on extended datasets; B.1 MetaLWOz pure task: a histogram of pairwise distances between generated and retrieved GRTr candidates; B.2 MetaLWOz cross-task: a histogram of pairwise distances between generated and retrieved GRTr candidates; B.3 MultiWOZ: a histogram of pairwise distances between generated and retrieved GRTr candidates. List of Tables: 3.1 API call accuracy (%) of the MemN2N trained on the full dataset; 3.2 MemN2N API call accuracy (%) with extended training data; 4.1 Example dialogue from SMD
. . . . . . . . . . . . 71 4.2 MetaLWOz dataset statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 4.3 Stanford multi-domain dataset statistics (trainset) . . . . . . . . . . . . . . . . 77 4.4 Selected clusters of utterances sharing the same DI-VAE codes . . . . . . . . . 79 4.5 DiKTNet\u2019s selected responses . . . . . . . . . . . . . . . . . . . . . . . . . . 81 5.1 Example annotated utterances from MultiWOZ . . . . . . . . . . . . . . . . . 89 5.2 Ranking from judges\u2019 pairwise comparisons . . . . . . . . . . . . . . . . . . . 89 5.3 GRTr example responses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 6.1 Evaluation of the dis\ufb02uency tagging models on SWDA . . . . . . . . . . . . . 101 6.2 Controlled generalisation evaluation on the 3 bAbI+ datasets . . . . . . . . . . 101 6.3 Most common repairs in SWDA of length 1\u20143 . . . . . . . . . . . . . . . . . 102 6.4 SWDA repairs by POS-tag pattern . . . . . . . . . . . . . . . . . . . . . . . . 102 6.5 SWDA repairs by interregnum . . . . . . . . . . . . . . . . . . . . . . . . . . 103 7.1 Augmented dialogue example . . . . . . . . . . . . . . . . . . . . . . . . . . 106 7.2 Model hyperparameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 7.3 Evaluation results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 7.4 bAbI6, GR, and GM dataset statistics . . . . . . . . . . . . . . . . . . . . . . 113 7.5 AE-HCN(-CNN) Evaluation results . . . . . . . . . . . . . . . . . . . . . . . 115 7.6 Performances of AE-HCN-Indep on bAbI6 Test-OOD with di\ufb00erent thresholds 116 7.7 AE-HCN-CNN performance on bAbI6 with varying counterfeit OOD rates . . 116 7.8 AE-HCN sensitivity to the \u03b2 hyperparameter on bAbI6 OOD-augmented testset 118 8.1 Correlation study of key dialogue aspects . . . . . . . . . . . . . . . . . . . . 121 8.2 Examples from the User feedback dataset for pairwise ranking . . . . . . . . . 128 8.3 Ranking models evaluation . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . 129 A.1 DiKTNet evaluation results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 A.2 MetaLWOz domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136 B.1 Automatic evaluation results on MetaLWOz pure task . . . . . . . . . . . . . . 139 B.2 Automatic evaluation results on MetaLWOz cross-task . . . . . . . . . . . . . 139 B.3 MetaLWOz pure task: GRTr predictions with the closest generated/retrieved candidates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 xi \fList of Tables xii B.4 MetaLWOz pure task: GRTr predictions with the most distant generated/retrieved candidates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 B.5 MetaLWOz cross-task: GRTr predictions with the closest generated/retrieved candidates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142 B.6 MetaLWOz cross-task: GRTr predictions with the most distant generated/retrieved candidates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143 B.7 MultiWOZ: GRTr predictions with the closest generated/retrieved candidates . 144 B.8 MultiWOZ: GRTr predictions with the most distant generated/retrieved candidates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 C.1 Multi-task LSTM training setup . . . . . . . . . . . . . . . . . . . . . . . . . 147 C.2 SWDA labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 D.1 Neural rankers training setup . . . . . . . . . . . . . . . . . . . . . . . . . . . 148 D.2 VowpalWabbit ranker training setup . . . . . . . . . . . . . . . . . . . . . . . 
149 \fAbbreviations AE-HCN Autoencoder Hybrid Code Network AMT Amazon Mechanical Turk API Application Programming Interface ASR Automatic Speech Recognition BERT Bidirectional Encoder Representations from Transformers BLEU Bilingual Evaluation Understudy BPR Batch Prior Regularisation BPTT Backpropagation Through Time CIDEr Consensus-based Image Description Evaluation CNN Convolutional Neural Network CRF Conditional Random Field CUI Conversational User Interface CV Computer Vision DiKTNet Dialogue Knowledge Transfer Network DI-VAE Discrete Information Variational Autoencoder DI-VST Discrete Information Variational Skip-Thought DP Dynamic Programming DS Dynamic Syntax DSSM Deep Semantic Similarity Model DST Dialogue State Tracker DSTC Dialog State Tracking Challenge, Dialog System Technology Challenge ELBO Evidence Lower Bound ELMo Embeddings from Language Models FT FastText GAN Generative Adversarial Network xiii \fAbbreviations xiv GloVe Global Vectors for Word Representation GPT Generative Pretrained Transformer GRTr Generative-Retrieval Transformer GRU Gated Recurrent Unit HCN Hybrid Code Network HMM Hidden Markov Model HRED Hierarchical Recurrent Encoder-Decoder IDF Inverted Document Frequency IVA Intelligent Virtual Assistant KB Knowledge Base OOD Out-of-Domain VHCN Variational Hybrid Code Network LAED Latent Action Encoder-Decoder LDA Latent Dirichlet Allocation LSTM Long Short-Term Memory MDP Markov Decision Process MemN2N End-to-End Memory NEtwork MetaLWOz Meta-Learning Wizard-of-Oz METEOR Metric for Evaluation of Translation with Explicit Ordering MLE Maximum Likelihood Estimate MLP Multilayer Perceptron MLM Masked Language Modelling MRC Machine Reading Comprehension MSE Mean Squared Error NER Named Entity Recognition NLG Natural Language Generation NLL Negative Log-Likelihood NLP Natural Language Processing NLTK Natural Language Toolkit NLU Natural Language Understanding NMT Neural Machine Translation NP Noun Phrase NSP Next Sentence Prediction 
POMDP Partially Observable Markov Decision Process
POS Part of Speech
PP Prepositional Phrase
QA Question Answering
ReLU Rectified Linear Unit
RL Reinforcement Learning
RNN Recurrent Neural Network
RNNLM Recurrent Neural Network Language Model
ROUGE Recall-Oriented Understudy for Gisting Evaluation
RT Record Type
Seq2Seq Sequence-to-Sequence
SGD Stochastic Gradient Descent
SMD Stanford Multi-Domain (dataset)
SP SentencePiece
SVM Support Vector Machine
SWDA Switchboard Dialog Acts
TD Temporal Difference
TF Term Frequency
TTR Type Theory With Records
TTS Text-to-Speech
ULMFiT Universal Language Model Fine-Tuning
VAE Variational Autoencoder
VP Verb Phrase
WOZ Wizard-of-Oz
ZSDG Zero-Shot Dialogue Generation

Chapter 1 Introduction

According to the Merriam-Webster Dictionary (2020), dialogue is "a conversation between two or more persons, also a similar exchange between a person and something else (such as a computer)". Conversation is the most versatile and efficient way for humans to communicate: a debate, a negotiation, or a friendly chit-chat — dialogue is the main means for all kinds of interaction between people. But it is not only other people whom one would want to interact with naturally: communicating with machines in a similarly intuitive way has been a major goal for researchers and engineers around the world. With past decades of Human-Computer Interaction (HCI) research and several generations of human-computer interfaces behind us, we are now on the brink of being able to leverage the empowering potential of computers just by talking to them. Specifically, advances in the fields of Speech Processing and Natural Language Processing (NLP) have made it possible to enable interaction with an ever-growing number of devices and services using natural language — interfaces of this kind are called Conversational User Interfaces (CUIs), or dialogue systems.
Currently, dialogue systems are already ubiquitous in everyday life:
— CUIs are now enabled on most personal devices, e.g. Apple Siri, Amazon Alexa, Google Assistant, and Microsoft Cortana, with their functionality constantly growing;
— enterprises, especially in the banking, healthcare, and retail spheres, are deploying CUI solutions for automating call centres, customer support websites, and sales assistance at online marketplaces;
— due to popular demand, a wide range of CUI-building platforms are now thriving on the market, e.g. Google Dialogflow (formerly Api.ai), Facebook Wit.ai, Rasa, Microsoft Bot Framework, Baidu DuerOS, Amazon Lex, and the Apple Siri platform for developers.
See Figure 1.1 for a visualisation of the variety of components and services in the ecosystem of dialogue solutions for enterprise.

Figure 1.1: Enterprise dialogue ecosystem

The market for Intelligent Virtual Assistants (IVAs) and related products was valued at $2.2 billion in 2018 and is expected to grow more than 10-fold by 2025, reaching $25.63 billion.

1.1 Dialogue Systems (CUIs)

Although CUIs gained wide adoption only in very recent years, conversational, or dialogue, systems have a long history. The first computer programs to support natural-language interaction appeared decades ago — the first one to gain wide public recognition was ELIZA, developed in 1966 by Joseph Weizenbaum at MIT (Weizenbaum, 1966). Programmed with a set of simple rules, it was tasked with maintaining a dialogue with the user close to what can be expected at a psychotherapy session. The system did not have any capability of "understanding" the user's words to any significant degree — nor was that its goal.
However, in its behaviour it appeared sufficiently close to a human, and thus it was one of the first programs to attempt the Turing Test, which evaluates a machine's ability to simulate human behaviour so that it cannot be distinguished from a real human (Turing, 1950). Since then, a lot of research and engineering effort has been put into creating conversational systems of ever-increasing functionality — however, early systems were constrained by the limitations of rule-based logic, which was never robust and flexible enough for natural language. This changed with the spread of machine learning techniques in the NLP field. Based on statistical analysis of real datasets rather than handcrafted rules, machine learning greatly improved dialogue systems' robustness to the aspects of spoken language, and also enabled Automatic Speech Recognition (ASR) systems to attain a practical performance level. This led to the creation of the first systems with voice interfaces — e.g. ESPRIT SUNDIAL (Peckham, 1991) was one of the first spoken dialogue systems: it worked over the phone line and provided air travel information for its users.

1 Image source: seekpng.com
2 Information from GrandView Research, accessed December 2019

Nevertheless, its language coverage was not very wide, and apart from the state-of-the-art ASR for its time, its dialogue behaviour was quite inflexible, as it was based on a set of rules. However, around that time dialogue systems started gaining interest in the enterprise sphere due to their potential in optimising business processes. Having attained sufficient performance in speech recognition (i.e. converting an utterance from audio signal into its textual form), dialogue systems research then focused on understanding the more high-level concepts contained in the utterance, i.e. extracting slots (types of entities in the context of the dialogue task, e.g.
'to', 'from' for the taxi booking task) with their values, as well as detecting the overall user's intent in the utterance. This problem is called Natural Language Understanding (NLU), which we are going to cover in Section 2.1.1. Machine learning approaches brought significant advances to NLU, thus making conversational interfaces able to execute a wide range of voice commands in different domains. Still, the NLU logic did not cover the temporal structure of conversation or the presence of dialogue context, reducing those interfaces to one-shot interactions for the most part (some basic support of dialogue was handled by handcrafted rules). The direction towards bringing full dialogue support to CUIs started with the introduction of the Dialogue Manager (DM), a component in the dialogue system architecture (Allen et al., 2001) maintaining the conversational context in the form of the set of slots and their values provided by the user so far, and taking the next dialogue action based on it (the DM will be discussed in detail in Section 2.1.2). Furthermore, with the highly time-distributed nature of conversation and extremely sparse feedback (success or failure of a dialogue can only be determined at the end of the conversation), Supervised Learning (SL) techniques — i.e., those requiring a reference output for each input to train the model — which were normally used for ASR or NLU could not be directly applied to dialogue management, and the attention of the research community shifted to Reinforcement Learning (RL) methods (Rieser and Lemon, 2011; Williams and Young, 2007) — those representing the model as an agent and training it from interactions with an environment, either simulated or real-world.
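The form-filling dialogue manager described above can be sketched in a few lines: its state is the set of slot-value pairs collected so far, and the next action is chosen from that state. This is a minimal illustration only — the slot names follow the taxi-booking example ('from', 'to'), while the action names (`request`, `api_call`) are invented placeholders, not any specific system's API.

```python
# A minimal rule-based dialogue manager: the state is the set of slot-value
# pairs collected so far, and the next system action is chosen from it.
# Slot names follow the taxi-booking example; action names are illustrative.
REQUIRED_SLOTS = ["from", "to"]

def next_action(state: dict) -> str:
    """Pick the next system action given the slots filled so far."""
    for slot in REQUIRED_SLOTS:
        if slot not in state:
            return f"request({slot})"   # ask the user for a missing slot
    return "api_call(book_taxi)"        # all slots filled: act on the goal

state = {}
print(next_action(state))               # request(from)
state["from"] = "the office"
print(next_action(state))               # request(to)
state["to"] = "home"
print(next_action(state))               # api_call(book_taxi)
```

Each user turn would update `state` via the NLU before the manager picks its next action; a learned (e.g. RL-based) manager replaces the hand-written rule with a trained policy over the same state.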
However, due to the problems of scaling early RL models and their optimisation techniques to real-world problems, as well as the need for an excessive amount of natural interactions for better training performance, Reinforcement Learning was mainly seen in academic research and proof-of-concept projects. With the latest revolution in machine learning — the availability of massive datasets and the computational power to process them — a technique called deep learning caused a major shift in dialogue systems research, as well as in the rest of the NLP field. As such, neural networks with multiple layers of neurons (thus deep models), trained in a unified way based on the backpropagation algorithm (Rumelhart et al., 1988), were able to learn complex patterns in raw data, their performance increasing with the amount of training examples. Specifically, one notable deep learning-based breakthrough was the neural conversation model of Vinyals and Le (2015) — a dialogue system with a single underlying neural model trained from raw transcripts, without the use of any explicit complex features or domain knowledge. Existing approaches to dialogue system components originally powered by 'classic' SL and RL algorithms also benefited from incorporating deep learning techniques (Henderson et al., 2014c; Wen et al., 2015b; Cuayáhuitl, 2016; Li et al., 2016b). The complex patterns in the data that deep learning models were able to learn — e.g. properties of words, sentences, and paragraphs — largely replaced the need for modular architectures, by efficiently learning all the steps of conversation in a unified, end-to-end fashion (Serban et al., 2016). Data has therefore become key in dialogue systems development, and the means to collect high-quality real dialogue examples came into focus (Miller et al., 2017).
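In the end-to-end recipe above, supervision reduces to (context, response) pairs read directly off raw transcripts, with no handcrafted features or domain knowledge. A minimal sketch of that preprocessing step — the transcript and the `</s>` turn separator here are invented for illustration:

```python
# End-to-end supervision: each turn becomes the target response for the
# concatenation of all turns preceding it. Transcript and separator token
# are illustrative only.
transcript = [
    "hello , how can i help ?",
    "i need a taxi to the airport",
    "when would you like to leave ?",
    "at 6 pm please",
]

def to_pairs(turns):
    """Build (context, response) training pairs from one raw dialogue."""
    return [(" </s> ".join(turns[:i]), turns[i]) for i in range(1, len(turns))]

pairs = to_pairs(transcript)
print(len(pairs))   # 3 training pairs from a 4-turn dialogue
print(pairs[1][1])  # when would you like to leave ?
```

A sequence-to-sequence model is then trained to map each context string to its response; everything else (state tracking, action selection, generation) is learned implicitly — which is precisely why such models are so data-hungry.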
However, this extremely high data consumption makes deep learning models not flexible enough for the growth pace of the field — particularly limiting is the human effort in data collection and manual annotation, which is both time-consuming and expensive. Moreover, these two steps have to be revisited every time the product requirements are corrected or new functionality is desired. Therefore, it is now of key priority to develop methods for training robust and practical neural dialogue models with small amounts of data — e.g. a few example dialogues. This will keep dialogue systems highly maintainable software products, and the corresponding development cycles rapid.

1.2 The Need for Data-Efficient Dialogue Models

Recent advances in dialogue modelling have resulted in a massive industrial adoption of deep learning techniques. Having originated as open-ended academic research, trained and evaluated on large static datasets, those techniques have to undergo substantial adaptation in order to fit industrial demands. The first drastic difference between experimental testbeds and real-world software products is that the latter are much more dynamic and flexible. In order to stay up to date with new feature requests and tweaks for better user experience, any well-maintained product is continuously modified throughout its lifecycle. Having the core components trained directly on massive datasets, while giving them a certain level of coverage and making them generalisable to some extent, leaves the resulting models static and inflexible to the variety of target domains and usage aspects of the end products — ultimately resulting in insufficient maintainability of the latter. Apart from that, the considerable timespans required by large-scale training restrict the models' fitness for fast-paced development cycles.
Lastly, the amount of annotations required by large-scale supervised training results in a significant financial overhead of such a development strategy as well.

The above concerns can be mitigated by adopting a data-efficient approach to the development of neural models, i.e. enabling training from a limited amount of seed data. Data-efficient training assumes a series of specific problems to be solved — for dialogue systems, those are as follows. Firstly, given that deep learning methods are greatly prone to overfitting, it is of key importance for the model to attain a sufficient generalisation level outside its seed dataset. Secondly, there is a major mismatch between the specifics of the written language of openly available data used for large-scale training of NLP models (e.g. internet news articles, Wikipedia documents, or books) and the spoken language of dialogue. The mismatch spans from differences in vocabulary and syntax to various incremental phenomena of spontaneous speech like pauses, self-corrections, restarts, and other disfluencies (Hough, 2014; Healey et al., 2018). Under the end-to-end deep learning framework, the conventional solution would be to have all the relevant speech phenomena covered in the training data, but in low-resource training this assumption does not hold. Finally, every dialogue system, especially one that provides a conversational interface to an underlying Application Programming Interface (API) or service, has the boundaries of its domain, and when working with a large user audience, it often happens that the system gets unexpected, or anomalous, input outside its designated domain. Given that most such systems are trained for maximum accuracy within the domain, it is important to have a means to guarantee predictable system behaviour outside it.
And expanding the training dataset with out-of-domain data in order to train for coverage is hardly possible, since the full variety of different domains and anomalous cases is very challenging to list. This is especially critical for systems trained from minimal data, where any additional data collection comes at an especially high cost.

1.3 Problems Addressed in the Thesis

As outlined above, in this thesis we are going to address the following problems:
1. Dependence on large amounts of training data with annotations. We are going to develop methods enabling the training of goal-oriented and chat-oriented dialogue systems (as well as their key components) with minimum human effort in terms of training data collection and annotation;
2. The lack of coverage of the diverse spoken language phenomena. Conversational corpora collected specifically for dialogue model development do not normally represent the aspects of spontaneous spoken language, therefore the models trained on them are not ready for immediate usage in real-world settings. We will investigate ways to improve their generalisation potential to such input without an increase in the amount of required training data;
3. Insufficient robustness of neural dialogue models to out-of-domain input. As is the case with training from small data, neural dialogue models are prone to overfitting to the training datasets and thus often lack robustness to anomalous out-of-domain user input, which leads to unexpected dialogue behaviour and thus considerably limits the usage of such models in mission-critical production environments. We are going to improve dialogue systems' robustness to out-of-domain input — again, without an increase in the amount of training data, i.e. only using the available in-domain data.
1.4 Contributions and Thesis Structure

The above problem statement defines the structure of this work: the thesis consists of a series of contributions towards practical data efficiency of machine learning-based dialogue systems. The thesis is organised as follows.

In Chapter 2, following the Introduction, we give a brief overview of dialogue system architectures — particularly, how the conventional modular architecture was transformed into the purely data-driven architecture under the influence of deep learning methods. We cover the two major types of purely data-driven architectures: response generation systems and response retrieval systems. We then outline the current state of transfer learning, which is the basis for data-efficient methods in machine learning overall. After that, we give a brief background on linguistically informed models of dialogue — specifically those based on dialogue grammars. We then discuss the problems of robustness of dialogue systems — especially those trained from minimal data — to unseen phenomena in the input, i.e. spoken disfluencies and out-of-domain user queries. We conclude the chapter with a discussion of the dialogue datasets widely used in the field, as well as the aspects of dialogue data collection.

In Chapter 3, we conduct a motivational study of two fundamentally different approaches to dialogue: a linguistically informed model based on the semantic parser DyLan, and a neural response retrieval model, the End-to-End Memory Network (MemN2N) — specifically, how they work in the setup of (1) natural input data containing spoken disfluencies and (2) limited training data. For (1), we introduce bAbI+, an extension of Facebook AI Research's bAbI Dialog Task 1 dataset augmented with spoken disfluencies. We outline the possible steps for obtaining practical data-efficient solutions with either type of model.
The work in this chapter was published at EMNLP 2017 (Eshghi et al., 2017; the author's contribution is the implementation of the Memory Network model, the bAbI+ data augmentation technique together with the corresponding dataset, the semantic user simulator, and the design and execution of the experiments) and SemDial 2017 (Shalyminov et al., 2017).

In Chapter 4, we focus on the problem of bootstrapping a goal-oriented dialogue system from minimal data, i.e. in a few-shot setup. Using the intuition that goal-oriented dialogue has a unified, domain-agnostic latent structure (e.g. sequences of dialogue acts, which are sequences of characteristic words and phrases), we introduce the Dialogue Knowledge Transfer Network (DiKTNet), a model that represents this structure in the form of latent Variational Autoencoder codes learned from a large multi-domain source of dialogues, MetaLWOz. The model outperforms the previous best approach on its target Stanford Multi-Domain (SMD) dataset. The work in this chapter was published at EMNLP 2019 (Shalyminov et al., 2019a) and SigDial 2019 (Shalyminov et al., 2019b).

In Chapter 5, we continue with the problem of training dialogue systems from minimal data as part of the Dialog System Technology Challenge (DSTC) 8 Fast Domain Adaptation task. We propose the hybrid Generative-Retrieval Transformer (GRTr). The model maintains a high diversity level by using sampling-based decoding, and alternates between generation and retrieval from the support sets, part of the DSTC-8 data split. GRTr is the winning entry at the DSTC-8 Fast Domain Adaptation task as evaluated with human judges. The work in this chapter was published at DSTC8@AAAI 2020 (Shalyminov et al., 2020) and ICASSP 2020 (Shalyminov et al., 2020).

In Chapter 6, we focus on the robustness of dialogue models to real-world variations of the user's input — specifically, spoken disfluencies (e.g. pauses, restarts, self-corrections).
We propose a multitask LSTM-based model for domain-general disfluency detection, improving upon the previous best model for incremental disfluency detection on the open-domain Switchboard Dialog Acts (SWDA) dataset and showing superior generalisation to bAbI+. The work was presented at SemDial 2018 (Shalyminov et al., 2018b).

In Chapter 7, we go on to explore robustness to a different kind of anomalous input — out-of-domain (OOD) utterances. In the low-resource setting, it is cumbersome to collect real OOD data for robustness training because of the great variety of possible domains and tasks. We address this problem by (1) presenting turn dropout, a data-augmentation technique allowing training for anomalous input using in-domain data only, and (2) introducing AE-HCN, a Hybrid Code Network-based dialogue management model with an autoencoder for robust training using turn dropout. AE-HCN improves upon the standard HCN on bAbI Task 6 (DSTC-2) and the Google Multi-Domain Dialogues Dataset. The work from this chapter was presented at CAI@NIPS 2018 (Shalyminov and Lee, 2018) and ICASSP 2019 (Lee and Shalyminov, 2019; the author's contribution is the bAbI+ data augmentation procedure and the augmented bAbI Task 6 dataset, as well as the turn dropout training technique).

In Chapter 8, we turn our attention to open-domain chat-oriented dialogue and its data-efficiency aspects. Our problem here is response ranking in an open-domain ensemble-based dialogue system which competed in the Amazon Alexa Prize. We introduce the neural response ranker used in Alana, the 3rd-place winner in the Amazon Alexa Prize 2017 and 2018. We explore two alternative supervision signals, dialogue rating and length, and show that the length-based model matches the performance of its rating-based counterpart and surpasses it given more unannotated training data, thus reducing the dependence on costly and cumbersome dialogue annotations.
The work in this chapter was published at SCAI@EMNLP 2018 (Shalyminov et al., 2018a) and in the Amazon Alexa Prize Proceedings (Curry et al., 2018; the author's contribution is the neural ranking model and the data-efficient response ranking study).

Having introduced a series of data-efficient methods for neural dialogue systems, in Chapter 9 we conclude the thesis with an outline of the future work left to be done in order to facilitate the adoption of these techniques in large-scale CUIs.

Chapter 2 Background and Motivation

Dialogue systems can be categorised by their purpose (goal-oriented and chat-oriented), modality of interaction (spoken, text-based, visual, multimodal), or the underlying model architecture (rule-based, data-driven), but on a high level they all have a similar structure. Over the years of research and development, this structure has undergone a series of transformations caused by the key breakthroughs affecting dialogue modelling. The most recent and influential of these transformations came with the adoption of machine learning techniques, especially deep learning. What follows is a description of the dialogue system architecture and how it changed upon becoming machine learning-centric.

The structure of this chapter is as follows. Firstly, we are going to discuss the conventional dialogue system architecture (Section 2.1) and how it then transformed into the fully data-driven architectures (Section 2.2). Specifically, we will cover the two principal approaches that fully data-driven systems follow, i.e. response retrieval models (Section 2.2.1) and response generation models (Section 2.2.2), with a subsequent overview of the key techniques defining the most widely-used advances in dialogue response generation (Section 2.3).
In Section 2.4, we will give a background on transfer learning, a general-purpose machine learning technique lying at the core of practical data efficiency, and the corresponding models for dialogue and NLP tasks. Having briefly covered the state of machine learning-based dialogue modelling, we then describe an alternative, linguistically informed approach to dialogue in Section 2.5. In Section 2.6, we will discuss the generalisation and robustness issues of systems trained from minimal data. As such, we will cover robustness to disfluencies in spoken language — and how proper processing of those phenomena is necessary for dialogue understanding. We will then discuss the problem of robustness to out-of-domain (OOD) input, i.e. user queries that a closed-domain dialogue system is unable to process correctly — this is especially important in settings where no real OOD data is available during training. We will conclude the chapter with Section 2.7, a brief overview of dialogue datasets and how the data collection stage is integrated into dialogue system pipelines and frameworks.

Figure 2.1: Conventional dialogue system architecture (Williams et al., 2016)

2.1 Conventional Dialogue System Architecture

Maintaining fluent and coherent dialogue assumes having solved a series of underlying problems in speech recognition/synthesis, language understanding, and action planning. The dialogue system architecture that emerged historically reflects this in its modular structure. It is normally referred to as the conventional architecture (Young, 2010; Williams et al., 2016) and is shown in Figure 2.1.
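As a rough sketch of the modular pipeline of Figure 2.1 (with ASR and TTS omitted, since audio processing is out of scope here), the text-based core chains NLU, dialogue management, and NLG; the component bodies below are toy stubs for illustration, not any actual system's implementation.

```python
# Toy modular pipeline: NLU -> Dialogue Manager -> NLG, operating on text.
# All three component bodies are illustrative stubs.
def nlu(utterance: str) -> dict:
    """Toy understanding step: extract a destination slot if present."""
    words = utterance.lower().split()
    return {"to": words[words.index("to") + 1]} if "to" in words[:-1] else {}

def dialogue_manager(state: dict, frame: dict) -> str:
    """Fold the new frame into the dialogue state, pick the next action."""
    state.update(frame)
    return "confirm" if "to" in state else "request(to)"

def nlg(action: str, state: dict) -> str:
    """Toy generation step: verbalise the chosen system action."""
    return (f"So you want a taxi to {state['to']}?" if action == "confirm"
            else "Where would you like to go?")

state: dict = {}
frame = nlu("i need a taxi to edinburgh")
action = dialogue_manager(state, frame)
print(nlg(action, state))   # So you want a taxi to edinburgh?
```

The fully data-driven architectures discussed in Section 2.2 collapse exactly this chain into a single model mapping input text to output text.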
At a high level, the conventional architecture covers the widest range of dialogue systems, including spoken and text-based ones — as such, it contains two components specific to voice-based interaction, namely ASR, decoding audio signals from the microphone into text, and Text-to-Speech (TTS), synthesising sound from the system's textual response. Advances in these two systems are key to the recent wide spread of personal voice assistants like Apple Siri, Amazon Alexa, and Google Assistant — and the consequent development of a large-scale market for those systems. However, audio signal processing is out of this thesis's scope. In this work, we will be focusing on the 'core' dialogue system logic that takes the user's input in textual form — either typed in directly or already decoded by the ASR — and produces a textual response as its final output, for displaying on the screen or feeding into the TTS. Throughout the description of the conventional architecture, we will mainly be talking about goal-oriented dialogue, to which it corresponds most closely. Later in Chapter 8, we will turn to open-domain chat-oriented dialogue as part of discussing more recent versions of the architecture.

2.1.1 Natural Language Understanding (NLU)

NLU (also referred to as Spoken Language Understanding, or SLU) is the first subsystem of the conventional pipeline, whose function is the extraction of relevant information from the user's input and its incorporation into the system's internal state. NLU performs this extraction at the turn level, i.e. from a single user utterance. Historically, dialogue understanding follows the frame semantics convention for formally describing meaning (Dinarelli et al., 2009, provide a comprehensive up-to-date example). Under this notation, every situation is represented as an attribute-value frame (or form).
In dialogue, a form describes a specific user goal (also referred to as an intent), e.g. booking a flight or searching for a restaurant, via its attributes (called slots). For the flight booking task used in the diagram, the complete form contains from, to, depart-time, and confirmed slots. Therefore, goal-oriented dialogue can be represented as a form-filling process, with NLU responsible for extracting the information relevant to it from a single utterance. Named Entity Recognition (NER), usually considered a general NLP task, can be performed by the NLU as well. NER addresses the extraction of a domain-agnostic set of entities such as persons, organisations, locations, or timestamps — see for example the approach of Finkel et al. (2005). Finally, as dialogue systems can handle multiple user intents with corresponding sets of tasks (e.g. 'set alarm', 'put on music', 'search web', 'chit-chat'), another type of processing performed at this stage is user intent detection. Intent detection is especially important since task-specific dialogue logic can be implemented as a completely independent subsystem — e.g. as of early 2020, Amazon Alexa contains more than 80,000 skills [1] (dialogue 'applications' within the Alexa platform), which are implemented mostly by independent developers.

Slot value extraction is a problem that was traditionally approached using a linguistically informed method, i.e. semantic parsing relying on large-scale grammars built by linguists — e.g. the CMU Phoenix system consisted of about 13,000 rules (Tur, 2011; Ward, 1991). More recently though, the slot value extraction task shifted to the machine learning framework and was treated as a sequence labelling problem.
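The sequence-labelling view can be made concrete with the decoding step that turns per-token BIO tags into slot-value pairs. The sketch below assumes a tagger (CRF, HMM, or neural) has already produced the tags; the slot names follow the flight-booking form above, and the example utterance is invented for illustration:

```python
# Decode per-token BIO tags (e.g. B-from, I-from, O) into slot-value pairs.
def decode_bio(tokens, tags):
    slots, name, value = {}, None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):                  # a new slot span begins
            if name is not None:
                slots[name] = " ".join(value)
            name, value = tag[2:], [tok]
        elif tag.startswith("I-") and name == tag[2:]:
            value.append(tok)                     # continue the current span
        else:                                     # "O" or an inconsistent tag
            if name is not None:
                slots[name] = " ".join(value)
            name, value = None, []
    if name is not None:                          # close a span at end of turn
        slots[name] = " ".join(value)
    return slots

tokens = "book a flight from New York to Boston".split()
tags = ["O", "O", "O", "O", "B-from", "I-from", "O", "B-to"]
print(decode_bio(tokens, tags))  # {'from': 'New York', 'to': 'Boston'}
```

The decoding logic is independent of the model producing the tags, which is why the same post-processing serves HMM-, CRF-, and LSTM-based taggers alike.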
Thus, 'classic' machine learning approaches such as Hidden Markov Models (HMMs; Wang et al., 2005; Young, 2003), Conditional Random Fields (CRFs; Lafferty et al., 2001; Sha and Pereira, 2003), and Support Vector Machines (SVMs; Mairesse et al., 2009) were predominantly used in NLU. User intent detection was also initially approached using heuristic methods like keyword detection or regular expression matching, but was later treated as a classification task, with corresponding classification models like those mentioned above or efficient methods of ensembling them, e.g. boosting (Schapire and Singer, 2000).

[1] Information from Voicebot.ai

Among the more recent approaches to NLU, E et al. (2019) proposed a neural model for joint slot-value extraction and intent detection. The part of the model predicting slot values is a combination of a Long Short-Term Memory cell (LSTM; Hochreiter and Schmidhuber, 1997) producing the latent states and a CRF on top making the actual predictions — a combination which itself has been widely used recently for sequence tagging tasks.

2.1.2 Dialogue State Tracking (DST)

The DST subsystem incorporates the information obtained from the NLU into the system's internal state (usually based on the form being filled), which it maintains throughout the dialogue. In early approaches, DST was implemented as a finite state machine updating the form with the new partial results coming from the NLU, and resolving any conflicts and updates caused by changes in the user's intention or the overall ambiguity of the dialogue process. However, the actual degree of ambiguity in real-world dialogue and the fluidity of conversation with real users made DST research shift to machine learning-based methods as well. As such, the annual Dialog State Tracking Challenge (DSTC; Williams et al., 2016) was organised with the goal of advancing the state of the art in dialogue research.
A number of approaches to DST emerged from DSTC: Recurrent Neural Networks (RNNs) for incremental word-by-word tracking (Henderson et al., 2014e; Zilka and Jurcicek, 2015), RNN-based domain-adaptive state tracking (Mrksic et al., 2015), and Convolutional Neural Networks (CNNs) for multi-language tracking (Shi et al., 2016). Since its 6th edition, with the expansion of the overall dialogue systems area, DSTC has been known as the Dialog System Technology Challenge, with a corresponding widening of its coverage to several tracks (e.g. 'Multi-Domain Task Completion Challenge', 'Noetic End-to-End Response Selection', and 'Visual Scene-Aware Dialogue'). As observed in the DSTC-8 results, recent state-of-the-art approaches to DST tend to combine state tracking with techniques from Machine Reading Comprehension (MRC) (Ma et al., 2019) for better out-of-domain/zero-shot performance.

2.1.3 Dialogue Policy

The policy is the key component in the conventional architecture: given the system state accumulated by the DST, it decides which action to take next. Actions can be interlocutory (e.g. confirmation, information request, greeting) or functional, e.g. calls to the underlying APIs or databases. The dialogue policy is often combined with the state tracker into the Dialogue Manager (DM) component, and in early approaches, the DM was implemented as a finite state machine whose logic was to scan through the dialogue form and enquire about the next unfilled slot or, when all the necessary information had been collected, issue an API call or switch to the corresponding business logic.
Figure 2.2: Dialogue as a Markov Decision Process (Rieser and Lemon, 2011)

At later stages though, it became clear that this deterministic process is not adequate for the complexity of real-world conversations, and the following two assumptions were made: (1) the information in the dialogue state is neither perfectly certain nor complete, and (2) the dialogue policy should optimise an objective function defining overall dialogue success. A policy was thus essentially considered a planning-under-uncertainty problem. In goal-oriented dialogue, the planning objective is normally the number of turns taken to reach the user's goal, whereas in open-domain chat-oriented dialogue it could be formulated via the user's engagement during the conversation and their overall satisfaction at the end of it (Ram et al., 2017). In planning under uncertainty, problems distributed in time and with a sparse, delayed reward signal are modelled as Markov Decision Processes (MDPs; Rieser and Lemon, 2011; Singh et al., 2002) or Partially Observable MDPs (POMDPs; Young et al., 2013; Williams and Young, 2007), with RL as the optimisation framework. An MDP is defined as a tuple $\langle S, A, T, R \rangle$ — the state space, the action space, the transition function between states, and the reward function. The dialogue system thus becomes an agent communicating with the environment (the user) by making observations, performing actions, and receiving the corresponding rewards. The form maintained by the DST becomes the agent's internal state, and the agent's goal is to go from the initial state (empty form) to the final state (form filled out, API call issued, information presented to the user) via an optimal (or sufficiently close to optimal) series of actions defined by the learned policy — the procedure is visualised in Figure 2.2.
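As a minimal sketch of this MDP formulation, the toy below treats the number of filled slots as the state and computes state values by value iteration. All transition probabilities and rewards here are invented for illustration, not taken from any real system:

```python
# A toy form-filling MDP: states = number of filled slots (0..2), with 2 being
# the terminal "form complete" state; actions = "ask" (fills a slot with
# probability 0.8, at a per-turn cost) or "confirm" (ends the dialogue,
# penalised when the form is incomplete). All numbers are illustrative.
S = [0, 1, 2]
A = ["ask", "confirm"]
GAMMA = 0.95

def transitions(s, a):
    """Return [(prob, next_state, reward), ...] — the T and R of the tuple."""
    if s == 2:
        return [(1.0, 2, 0.0)]                        # terminal: nothing happens
    if a == "ask":
        return [(0.8, s + 1, -1.0), (0.2, s, -1.0)]   # each turn costs 1
    return [(1.0, 2, -10.0)]                          # premature confirmation

V = {s: 0.0 for s in S}
for _ in range(100):                                  # value-iteration sweeps
    V = {s: max(sum(p * (r + GAMMA * V[s2]) for p, s2, r in transitions(s, a))
                for a in A)
         for s in S}

print({s: round(v, 3) for s, v in V.items()})  # → {0: -2.393, 1: -1.235, 2: 0.0}
```

The values reflect the planning objective stated above: states closer to a completed form are worth more, because fewer costly turns remain.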
In turn, the RL framework accounts for the time-distributed nature of dialogue as well as its sparse, delayed reward. Specifically, in RL we operate on the value of each state — that is, the expected cumulative (final) reward that the agent will get starting at a certain state s at timestep t and following the optimal policy π:

$V_\pi(s) = \mathbb{E}_\pi\left[ R \mid s_t = s \right]$ (2.1)

The values for each state are obtained by solving the Bellman equations (Bellman and Kalaba, 1957) in a Dynamic Programming (DP) way, or by using simulation-based methods, e.g. Temporal Difference (TD) learning. TD-based methods such as Q-learning or SARSA are more widely used as they do not require knowing the dynamics of the model, i.e. all the state transition probabilities. Successful application of TD learning assumes having a source of interactions for the agent, e.g. a simulated environment or an embodiment for real-world interaction. In some cases, especially video games (Mnih et al., 2013; Berner et al., 2019), it is feasible to obtain enough real-world interactions to run a sufficient number of training episodes. In dialogue, however, such data may only come from interactions with real users, which requires an existing deployment of a prototype system. Therefore, a lot of research effort has gone into developing efficient user simulators for 'bootstrapping' the initial policy (Rieser and Lemon, 2011; Shi et al., 2019a), but in practice, training an efficient simulator is as hard as training the final dialogue system.

Figure 2.3: Hybrid Code Network architecture (Williams et al., 2017). The black parts are trainable components (with entity extraction being an independent NLU engine); trapezoids denote the parts of in-domain logic implemented by the engineers.
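The TD learning mentioned above can be sketched as tabular Q-learning on a toy form-filling environment of the same shape (states = filled slots, terminal at 2). The dynamics, rewards, and hyperparameters are all invented for illustration; the point is only the shape of the TD update:

```python
import random

random.seed(0)

ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
ACTIONS = ("ask", "confirm")
Q = {(s, a): 0.0 for s in (0, 1) for a in ACTIONS}

def step(s, a):
    """Toy environment: 'ask' fills a slot with probability 0.8 (turn cost -1);
    'confirm' ends the dialogue and is penalised while the form is incomplete."""
    if a == "ask":
        return (s + 1 if random.random() < 0.8 else s), -1.0
    return 2, -10.0          # confirm is only reachable from incomplete states

for _ in range(5000):        # training episodes (interactions with a simulator)
    s = 0
    while s != 2:
        if random.random() < EPS:                      # epsilon-greedy action
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        future = 0.0 if s2 == 2 else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * future - Q[(s, a)])   # TD update
        s = s2

print({k: round(v, 2) for k, v in Q.items()})
```

No transition probabilities appear in the update itself, which is exactly why TD methods do not require the model dynamics.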
More recent approaches to Dialogue Management use supervised learning techniques such as Recurrent Neural Network-based (RNN) Hybrid Code Networks, or HCNs (Williams et al., 2017), and End-to-End Memory Networks, or MemN2Ns (Sukhbaatar et al., 2015; Bordes et al., 2017): trained with long enough context, these models approximate long-term optimal behaviour well enough for practical usability in relatively short conversations. The Hybrid Code Network, shown in Figure 2.3, is a recent example of a neural network-based dialogue management model. The overall system architecture follows the modular approach, as it has separate components for entity extraction (for which it used the LUIS system of Williams et al., 2015) and template-based Natural Language Generation (we will cover this subsystem in the next section). The HCN only focuses on state tracking and action selection, and it is aimed at training from minimal amounts of data for direct use in products. To ensure stability in real-world interactions in this minimal-training-data setting, the HCN introduces the concept of action masks, which may be considered expert rules incorporated into a machine learning-based model. Action masks, binary masks applied at the DM's output, prohibit the system from issuing infeasible actions at critical points in the dialogue, e.g. issuing a bank-transfer API call before confirming the recipient account with the user. These masks have to be hand-crafted by domain experts and incorporated into the training pipeline. Overall, this hybrid architecture makes it possible to shift the emphasis between tight, hand-crafted control over the system's behaviour via action masks and the flexibility of learning from examples.
The authors presented a 2-stage approach to training an HCN: at the first stage, the model learns to mimic the training dialogue examples in a supervised way, and at the second stage, the system can be further fine-tuned autonomously, from interactions with users, in a reinforcement learning fashion. For that, the authors used a policy gradient approach (Williams, 1992) with the following gradient update:

$w \leftarrow w + \alpha \left( \sum_t \nabla_w \log \pi(a_t \mid h_t; w) \right) (G - b)$ (2.2)

In the formula above, the gradient is applied to the policy LSTM π producing a distribution over actions a_t at timestep t given the dialogue history h_t. The error is the difference between G, the return of the dialogue (the expected discounted sum of rewards), and b, a baseline average return set heuristically. This switch from supervised to reinforcement learning (referred to as continuous learning) of the same exact model proved to be the most widely used way to incorporate reinforcement learning into dialogue system training setups (Su et al., 2017; Su et al., 2016). The HCN lies at the core of Microsoft Conversation Learner (Shukla et al., 2020), a tool for rapid prototyping of goal-oriented dialogue systems from example conversations which assumes data-efficient training. We are going to revisit the model in Chapter 7 to explore the problem of handling out-of-domain user input unseen during training, in the setting where no such training examples are available.

2.1.4 Natural Language Generation (NLG)

The last processing stage in the conventional, modular dialogue system architecture is NLG. At this stage, with the system's action chosen by the policy, the surface form of the corresponding utterance is generated. As shown in Figure 2.1, the information coming from the policy is the dialogue act identifier itself as well as some of its attributes.
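A minimal numerical sketch of the update in Eq. 2.2 can be given for a two-action softmax policy over a single scalar "history feature". The feature, rewards, and baseline value below are invented for illustration; the episodes are one step long, so the return G is just the immediate reward:

```python
import math
import random

random.seed(1)

w = [0.0, 0.0]                       # one policy weight per action
ALPHA, BASELINE = 0.2, 0.5           # learning rate alpha and baseline b

def probs(h):
    """Softmax policy pi(a | h; w) over two actions."""
    z = [math.exp(wk * h) for wk in w]
    return [v / sum(z) for v in z]

for _ in range(500):                 # one-step "dialogues"
    h = 1.0                          # fixed scalar history feature
    p = probs(h)
    a = 0 if random.random() < p[0] else 1
    G = 1.0 if a == 0 else 0.0       # action 0 is the rewarded one
    # gradient of the log softmax: d/dw_k log pi(a) = (1[k == a] - p_k) * h
    for k in (0, 1):
        w[k] += ALPHA * ((1.0 if k == a else 0.0) - p[k]) * h * (G - BASELINE)

print(round(probs(1.0)[0], 2))       # probability of the rewarded action grows
```

Both possible outcomes push the weights the same way: a rewarded action is reinforced directly, and an unrewarded one (G − b < 0) is suppressed, so the policy concentrates on action 0.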
This information is passed into NLG because, traditionally, response generation is considered a 2-stage process: firstly, a delexicalised template of the final utterance is generated or selected (i.e., with all the case-specific slot values replaced by filler placeholder tokens). After that, the template gets lexicalised back into its final surface form using the extra information from the policy. One of the most widely used notations for such frames is the CUED Standard Dialogue Acts (Young, 2009). In early, template-based approaches to NLG, templates were stored explicitly, and the system would just pick a random one corresponding to a given dialogue act (e.g. Rudnicky et al., 1999). Later on, NLG was considered an optimisation problem (similarly to the components earlier in the pipeline discussed above), inspired by advances in artificial intelligence and planning under uncertainty. NLG's optimisation objectives were those of the information presentation problem, i.e. maintaining the user's focus, speeding up information exchange, and improving the overall task success rate. For example, an adaptive NLG component can learn how to present a database lookup with 1, 5, or 50 results — that is, whether to go one by one, limit the output to the top 3, or announce the number of results and just display the top one. As every dialogue assumes making a series of such information presentation decisions, with feedback coming at the very end, it is intuitive to approach this problem within the RL framework. There have been a number of works proposing RL methods for NLG, e.g. Dethlefs and Cuayáhuitl (2010) and Rieser and Lemon (2009). They mainly focused on learning the first generation stage, i.e. response planning, while still heavily relying on rule-based surface realisation, and were thus limited in the flexibility of the overall output.
Dusek and Jurcicek (2015) used a hybrid approach for planning: the A* algorithm for syntactic dependency tree construction, with perceptron-based pruning (their surface realisation stage, constructing sentence plans in the shape of a syntactic dependency tree, was also rule-based). The most important transformation of statistical NLG came with deep learning-based methods: recurrent neural networks (RNN, LSTM) made it possible to streamline the process by directly generating sentences word by word in a language-model fashion, i.e. generating the next word given the context, e.g. RNNLM (Mikolov et al., 2010) — with the dialogue act information stored in the latent network state.

A basic RNN-based NLG model is shown in Figure 2.4. The model generates output words by maintaining its hidden state h and updating it with every input token as follows (the equations below correspond to the particular model known as the Elman RNN; Elman, 1990):

$h_t = \sigma_h \left( W_h x_t + U_h h_{t-1} + b_h \right)$ (2.3)

$y_t = \sigma_y \left( W_y h_t + b_y \right)$ (2.4)

where x_t is the input at timestep t (in our case, an encoded token), y_t is the model's corresponding output, h_{t-1} is the model's previous state, W_h, W_y, U_h, b_h, b_y are the model's trainable weights, and σ_h, σ_y are activation functions. Usually, a sigmoid activation is used for the hidden state and a softmax for the output.

Figure 2.4: RNN-based Natural Language Generation

The initial state h_0 of the model in the figure is the DM output (normally a dialogue act). This information can also be passed into the network at every step, e.g. as in Wen et al. (2015a). Note that in the figure, the model's input x_t is its previous output y_{t-1}. This is the setup such models operate in at inference time; at training time, however, x_t may as well be the ground-truth tokens (an approach referred to as teacher forcing).
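Eqs. 2.3-2.4 can be written out directly. The sketch below runs one step of an Elman RNN with a sigmoid hidden activation and a softmax output; the dimensions and random weights are illustrative stand-ins for a trained model:

```python
import math
import random

random.seed(0)

I, H, V = 3, 4, 5                    # input, hidden, and vocabulary sizes

def rand_matrix(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

Wh, Uh, Wy = rand_matrix(H, I), rand_matrix(H, H), rand_matrix(V, H)
bh, by = [0.0] * H, [0.0] * V

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def rnn_step(x, h_prev):
    # Eq. 2.3: h_t = sigmoid(Wh x_t + Uh h_{t-1} + bh)
    pre = [a + b + c for a, b, c in zip(matvec(Wh, x), matvec(Uh, h_prev), bh)]
    h = [1.0 / (1.0 + math.exp(-z)) for z in pre]
    # Eq. 2.4: y_t = softmax(Wy h_t + by)
    logits = [a + b for a, b in zip(matvec(Wy, h), by)]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    y = [e / sum(exps) for e in exps]
    return h, y

h, y = rnn_step([1.0, 0.0, 0.0], [0.0] * H)  # one-hot token, zero initial state
print(len(h), len(y), round(sum(y), 6))      # prints: 4 5 1.0
```

At inference time the argmax (or a sample) of y would be fed back in as the next x, exactly the feedback loop described above; with teacher forcing the ground-truth token is fed in instead.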
In practice, basic RNN networks were quickly replaced by LSTMs (Hochreiter and Schmidhuber, 1997), which improve upon RNNs in training stability by using trainable input, output, and forget gates that directly control what information to store in the model's state and what to explicitly forget. In addition, the approach of Wen et al. (2015b) combined two RNN cells, one being a regular LSTM over input tokens, and the other a special 'lightweight' Dialogue Act cell. The Sequence-to-Sequence (Seq2Seq) neural architecture (Vinyals and Le, 2015; Sutskever et al., 2014; Cho et al., 2014b; to be discussed in detail in Section 2.2.2) also made it possible to both encode the dialogue act into a latent representation and then decode the output in a unified token-by-token process. Dusek and Jurcicek (2016) used this approach to generate both syntax trees for further surface realisation and fully realised output sentences. The encoder-decoder Seq2Seq architecture represented the fully data-driven approach to NLG, which assumes a single streamlined training/prediction procedure with no intermediate stages or supervision. This approach was adopted for other dialogue system components as well: dialogue managers, for instance, went through a similar transformation from having two separate subsystems, DST and Policy, to a unified architecture like the HCN model discussed above. Eventually, the entire dialogue system pipeline became fully data-driven. In the next section, we are going to discuss the key types of dialogue system architectures that emerged as part of the fully data-driven (Gao et al., 2019), or corpus-based (Jurafsky and Martin, 2019), approach to dialogue modelling.
2.2 Fully Data-Driven Architectures

'Classic' machine learning methods transformed the dialogue systems field so that the most efficient approaches shifted from analytical methods like expert rules, grammars, and ontologies towards data-driven techniques, e.g. CRFs for NLU or reinforcement learning over (PO)MDPs for dialogue management. This resulted in more flexible and adaptive systems, and also in a shift of focus in their development towards, firstly, collecting data (corpora of hundreds of dialogues were normally used at that point) and, secondly, tasking experts with feature engineering for the machine learning models instead of directly writing the rules for the system. This reliance on feature engineering instead of task-specific expert knowledge was the main factor contributing to the overall success and efficiency of the machine learning approach.

With the arrival of large, internet-scale datasets and the computational power to train machine learning models large enough to make use of this data, the next generation of machine learning — deep learning — began. Characterised by multi-layer (or deep) model architectures and a unified technique for training them (predominantly variants of Stochastic Gradient Descent, SGD; Bottou, 2010), deep learning transformed the machine learning framework in the following fundamental way. With enough training data, deep neural networks were able to approximate non-linear relations between the input variables, thus learning features — often hierarchical — without the need for manual engineering (LeCun et al., 1998). This opened up the possibility of training models for the target tasks directly from raw data, with the underlying latent features learned automatically and more efficiently. One of the first and most notable examples of training a neural network from raw data was the approach of Mnih et al.
(2013) featuring an agent for playing Atari games, trained with deep RL directly from the pixels of the game screen. This advance also had a massive impact on NLP: the benefits of training from more data with fewer annotations were experienced in machine translation, question answering, and document summarisation. Dialogue systems research was one of the areas transformed under the influence of deep learning methods. As such, models became less modular, with the main focus on collecting a large number of conversations and training the entire model (or its core part) on them. We are going to describe several kinds of such models below.

Figure 2.5: The DSSM model architecture for web search (Huang et al., 2013). Semantic representations of indexed documents D_i as well as the user's query Q are obtained with multiple trainable non-linear projection layers. The relevance scores are obtained via cosine similarity of query/document encoding pairs, then turned into a probability distribution over documents.

2.2.1 Response Retrieval Models

Producing conversational utterances may be considered a response selection task, in which a dialogue system works similarly to a search engine: indeed, searching a collection of documents for those relevant to a user's query is analogous to searching for utterances given the user's input (possibly together with the dialogue context). Dialogue models working in this way are called response retrieval models. They have a search database (or index) of responses or full conversations, and given a new dialogue context as input, they retrieve from the index the best candidate according to some optimality criterion. This criterion can be the similarity between the context and the response, e.g. TF-IDF or Okapi BM25 (Manning et al., 2008) — or a more advanced learned objective function.
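The simple similarity criterion can be sketched as TF-IDF cosine ranking over a toy index of three utterances. The index entries and queries below are invented; a real system would index a large conversational corpus:

```python
import math
from collections import Counter

# A tiny "index" of conversation turns to retrieve from (illustrative).
index = [
    "what movies are playing tonight",
    "book a table for two at an italian restaurant",
    "play some relaxing jazz music",
]

docs = [set(d.split()) for d in index]
# idf(w) = log(N / df(w)); words present in every document get weight 0.
idf = {w: math.log(len(index) / sum(w in d for d in docs))
       for d in docs for w in d}

def tfidf(text):
    return {w: c * idf.get(w, 0.0) for w, c in Counter(text.split()).items()}

def cosine(a, b):
    dot = sum(a.get(w, 0.0) * x for w, x in b.items())
    na = math.sqrt(sum(x * x for x in a.values())) or 1.0
    nb = math.sqrt(sum(x * x for x in b.values())) or 1.0
    return dot / (na * nb)

vecs = [tfidf(d) for d in index]

def retrieve(query):
    scores = [cosine(tfidf(query), v) for v in vecs]
    return index[scores.index(max(scores))]

print(retrieve("can you put on some music"))  # play some relaxing jazz music
```

In a full system this cheap first stage would feed its top candidates to a learned re-ranker rather than returning them directly.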
Retrieval models are widely used in chat-oriented dialogue, where the objective is to maintain the conversation and keep the user engaged and entertained. Naturally, chat-oriented systems can benefit from conversational data openly available on the Internet, e.g. discussion forums (Baumgartner et al., 2020), movie subtitles (Lison and Tiedemann, 2016), or post threads on social networks (Sordoni et al., 2015). One example of such an architecture is the Deep Semantic Similarity Model (DSSM; Huang et al., 2013), originally developed for document re-ranking in web search. The DSSM architecture is visualised in Figure 2.5. DSSM is a deep feed-forward neural architecture: the input, a bag-of-words term vector x (corresponding to a search query Q or to a document D_i, i = 1, ..., n in the search database), is fed through a series of non-linear layers l_1, ..., l_N with trainable weight and bias parameters W_1, ..., W_N and b_1, ..., b_N, respectively — ultimately resulting in a deep semantic representation y of the input. Formally, this pipeline is of the following form:

$l_1 = W_1 x$
$l_i = f\left( W_i l_{i-1} + b_i \right), \quad i = 2, \ldots, N-1$
$y = f\left( W_N l_{N-1} + b_N \right)$ (2.5)

where f is the tanh activation function:

$\tanh(x) = \frac{1 - \exp(-2x)}{1 + \exp(-2x)}$ (2.6)

The semantic relevance of a document D given the query Q is then calculated as a cosine similarity:

$R(Q, D) = \cos(y_Q, y_D) = \frac{y_Q^T y_D}{\lVert y_Q \rVert \, \lVert y_D \rVert}$ (2.7)

This technique for calculating document relevance given a query in a deep semantic representation was translated into the conversational response selection framework: as such, it was used in Microsoft Research's XiaoIce bot (Zhou et al., 2020).
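Eqs. 2.5-2.7 can be sketched end to end: a tanh feed-forward encoder maps bag-of-words term vectors to semantic vectors y, which are then compared by cosine. The layer sizes, random weights, and term vectors below are illustrative; a trained DSSM would learn the weights from clicked query-document (or context-response) pairs:

```python
import math
import random

random.seed(0)

DIMS = [6, 4, 3]                     # term-vector size, hidden size, semantic size
Ws = [[[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
      for m, n in zip(DIMS, DIMS[1:])]
bs = [[0.0] * n for n in DIMS[1:]]

def encode(x):
    # Eq. 2.5: l1 = W1 x, then tanh layers up to the semantic vector y.
    l = [sum(w * v for w, v in zip(row, x)) for row in Ws[0]]
    for W, b in zip(Ws[1:], bs[1:]):
        l = [math.tanh(sum(w * v for w, v in zip(row, l)) + bi)
             for row, bi in zip(W, b)]
    return l

def relevance(q, d):
    # Eq. 2.7: cosine similarity of the two semantic vectors.
    yq, yd = encode(q), encode(d)
    dot = sum(a * b for a, b in zip(yq, yd))
    return dot / (math.sqrt(sum(a * a for a in yq)) *
                  math.sqrt(sum(a * a for a in yd)))

query = [1, 0, 1, 0, 0, 0]           # illustrative bag-of-words term vectors
doc_a = [1, 0, 1, 0, 1, 0]
doc_b = [0, 1, 0, 0, 0, 1]
print(round(relevance(query, doc_a), 3), round(relevance(query, doc_b), 3))
```

The same shared encoder scores every candidate, so an index can be pre-encoded once and compared against each incoming query with only cosine computations.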
DSSM was used there as the response re-ranker — that is, having retrieved an initial set of response candidates over a large collection of conversations using simple and 'fast' relevance metrics like TF-IDF or Okapi BM25, those candidates were then re-ranked using the more fine-grained (and computationally 'heavy') model. The same 2-stage retrieval process is used in search engines, with the re-ranker constantly pushed to work faster in order to handle a greater number of documents with real-time performance, eventually taking the place of the main ranking stage (Zamani et al., 2018). Later on, with the development of neural architectures more suitable for textual data (predominantly LSTMs), new DSSM-based response selection models emerged. As such, QA-LSTM (Tan et al., 2015), initially introduced for the Question Answering (QA) task, was later used as the response ranker in the personal chatbot Replika (Fedorenko et al., 2018), which gained massive adoption on the mobile application market.

Another widely used family of neural retrieval architectures is Memory Networks (MemNNs; Weston et al., 2015), which are based on the notion of explicit memory, with the network operating over a set of 'cells' storing observations, e.g. supporting facts for QA or context utterances for dialogue. Part of the architecture is also a differentiably trainable controller for reading/writing the memory. The end-to-end variant of memory networks, MemN2N (Sukhbaatar et al., 2015), was the successor of MemNNs for the QA task — but it was later adapted to dialogue response selection. We will therefore explain the intuition behind the model for both QA and dialogue.
Figure 2.6: MemN2N model — single-hop (a) and multi-hop (b) architectures (Sukhbaatar et al., 2015)

The main MemN2N architecture is shown in Figure 2.6 (a) — its main components are the embedding matrices A, B, and C, providing the differentiable representations for the memories and the user's query, and the output projection matrix W. Specifically, input sentences (context in the case of dialogue, or facts in the case of QA) are represented in the form of memories using the embedding matrix A. The same is done with q — the user's utterance in the case of dialogue, or the question in the QA setting — using the matrix B. Then, similarly to DSSM and QA-LSTM, the key operation is calculating the similarity between the user's query and each of the memories:

$p_i = \mathrm{softmax}\left( u^T m_i \right)$ (2.8)

where $\mathrm{softmax}(z_i) = \exp(z_i) / \sum_j \exp(z_j)$, u is the embedded user query, and m_i are the system's memory cells. These similarity scores implement what is originally referred to as reading from memory with attention. That is, the contents of the memory cells contribute to the final answer proportionally to their attention weights p_i (we will also discuss other variants of the attention mechanism in detail in Section 2.3). Next, MemN2N takes a weighted sum of the memories using the output embedding C and the weights p_i:

$c_i = x_i^T C, \qquad o = \sum_i p_i c_i$ (2.9)

The final operation in MemN2N is similar to DSSM and QA-LSTM: a relevance metric is calculated over the user's query u together with the combined system memory state o and all the actions available to the system y_i (the set Y includes all possible responses and API calls):
$\hat{a} = \mathrm{softmax}\left( (o + u)^T W y \right)$ (2.10)

Unlike in DSSM and QA-LSTM, cosine similarity is not used as the relevance metric here: the output matrices W and C serve this purpose instead (as do A and B, since in end-to-end training all the model components contribute to the final task). The crucial difference between the QA and dialogue setups is that in the QA task, answers correspond to single vocabulary words, so the projection W produces a distribution over vocabulary ids. In dialogue, however, response candidates are multi-word utterances, with different sets for training and testing. Therefore, they cannot be predicted as ids, and are instead embedded using the matrix W as shown in Eq. 2.10. Described above is a 'single-hop' MemN2N; the architecture, however, can be extended to multiple 'hops'. Such a deep MemN2N is visualised in Figure 2.6 (b), with separate A_i and C_i matrices for every hop. A multi-hop model is basically a stack of base MemN2Ns connected in the following way: the combined system/user state o_i + u — before the output projection — is passed to the (i+1)'th hop as its input, and the projection W is only applied at the final hop for producing the final answer.

The above model was used by Bordes et al. (2017) to train a goal-oriented dialogue response retrieval system using only raw utterances. The model was evaluated on a synthesised dataset, bAbI Dialog Tasks, an experimental testbed designed to shed light on the complex problem of goal-oriented dialogue management by decomposing it into several tasks of increasing complexity. By showing quite impressive behaviour of MemN2N on synthesised yet challenging data (the action set of bAbI Dialog Tasks exceeded 4,000), Bordes et al.
demonstrated that goal-oriented dialogue — which has traditionally been considered a complicated, mission-critical task and relied on an extensive pipeline of components such as language understanding, state tracking, response planning (all described earlier in this chapter), and integration with domain-specific APIs — can potentially be solved with a single unified model given raw example dialogues as input. We are going to look into MemN2N's performance on the bAbI Dialog Tasks testbed in Chapter 3, to see how specific surface variations of user input unseen at training time can affect the model's performance. More Memory Network-based models emerged later on, including knowledge-based (Ganhotra and Polymenakos, 2018), personalised (Luo et al., 2019), and Key-Value MemNNs (Miller et al., 2016), as well as those pipelined in a 'retrieve-and-refine' architecture (Weston et al., 2018).

Although response retrieval models gained wide adoption in industrial applications and products because their output is fluent and more predictable — dependent on the quality of the corpus of candidate responses — their flexibility is at the same time always bounded by that corpus. This limitation is addressed in a parallel line of dialogue systems research focused on response generation. While significantly harder to train and providing less certainty about output fluency, models of this type are in principle capable of generating novel utterances word by word, and they are most extensively studied in academia. We are going to discuss them in detail below.

2.2.2 Response Generation Models

The response generation approach to conversation modelling stemmed from an advance in Neural Machine Translation (NMT), namely the Seq2Seq model (Sutskever et al., 2014).
This LSTM-based model was one of the first end-to-end approaches to machine translation, working with raw text tokens as opposed to previous phrase-based (e.g. Koehn et al., 2003) and syntax-based (e.g. Yamada and Knight, 2001) models. Beyond machine translation, Seq2Seq became a general-purpose model for various NLP tasks due to its versatility: it could take a sequence of raw tokens as input, encode it into a 'thought vector', and produce an output sequence of a length independent from the input (referred to as a many-to-many model). Arguably the most ambitious application of Seq2Seq was in training an open-domain conversation model, a problem that had never before been approached with a single, purely data-driven machine learning model (Vinyals and Le, 2015). As opposed to goal-oriented dialogue, with its specific objective of reaching the user's goal in the minimum number of conversational exchanges, open-domain conversation has no definite notion of a goal. Therefore, the initial objective that the dialogue Seq2Seq was trained with is essentially to mimic the responses seen in a large training set of conversations, e.g. a corpus of movie subtitles (the OpenSubtitles corpus of Tiedemann, 2009 was used in the original work). More formally, this objective is expressed as the Maximum Likelihood Estimate (MLE) of the output tokens $y_1, \dots, y_{T'}$ given the input tokens $x_1, \dots, x_T$:

$p(y_1, \dots, y_{T'} \mid x_1, \dots, x_T) = \prod_{t=1}^{T'} p(y_t \mid v, y_1, \dots, y_{t-1})$  (2.11)

The formula above reflects the encoder-decoder nature of Seq2Seq: the encoder component (an RNN-based model in the original work) processes the input sequence $x_1, \dots, x_T$ word by word, eventually producing a latent representation of the input, $v$. The decoder component (also RNN-based) produces the output sequence $y_1, \dots, y_{T'}$ word by word, at each step conditioning on its internal RNN state and the last generated token.
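The decoding loop just described (feed the decoder's state and the last generated token back in, stopping at the end-of-sequence symbol) can be sketched in a few lines. The `toy_decoder_step` and its transition table below are hypothetical stand-ins for a trained RNN decoder:

```python
def greedy_decode(decoder_step, init_state, bos, eos, max_len=20):
    """Greedy Seq2Seq decoding: starting from the encoder's final state v,
    repeatedly feed the last generated token back into the decoder and pick
    the most probable next token, until <eos> is produced."""
    state, token, output = init_state, bos, []
    for _ in range(max_len):
        state, probs = decoder_step(state, token)  # p(y_t | v, y_1..y_{t-1})
        token = max(probs, key=probs.get)          # greedy argmax over tokens
        if token == eos:
            break
        output.append(token)
    return output

# Hypothetical decoder that deterministically 'spells out' a reply.
TRANSITIONS = {"<bos>": "hi", "hi": "there", "there": "<eos>"}

def toy_decoder_step(state, token):
    return state, {TRANSITIONS[token]: 1.0}

print(greedy_decode(toy_decoder_step, None, "<bos>", "<eos>"))  # ['hi', 'there']
```

Note that greedy argmax decoding is only one option; sampling from `probs` instead, discussed later in this chapter, trades fluency for diversity.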
Starting with $v$ as its initial state, the decoder updates and maintains that state throughout the generation process. Crucially, the decoder's output sequence is not limited to any fixed length; instead, the decoder is directly trained to produce a special end-of-sequence symbol, at which point the output effectively ends for all downstream processing. As noted above, in the original paper both the encoder and the decoder were recurrent networks, specifically LSTMs, but in general the requirements on the Seq2Seq components are as follows:

Figure 2.7: Sequence-to-sequence conversation model architecture

1. the encoder is any 'many-to-one' model, i.e. able to encode an input sequence into a vector (in later versions of the model, a 'many-to-many' model outputting all the intermediate word-by-word encodings is required);
2. the decoder is any 'one-to-many' model, able to generate an output sequence out of a single vector.

A Seq2Seq conversation model based on LSTMs is visualised in Figure 2.7. As mentioned above, the model is trained to 'mimic' responses from the training dataset by learning from randomly sampled context-response pairs over large conversational corpora. Initially, datasets of movie subtitles were used for this, e.g. the Cornell Movie Dialogs dataset (Danescu-Niculescu-Mizil and Lee, 2011) and OpenSubtitles (Lison and Tiedemann, 2016), as well as threads of posts on message boards, e.g. Reddit conversations (Baumgartner et al., 2020) and Twitter comments (Sordoni et al., 2015).

2.3 Key Techniques for Dialogue Response Generation Models

The Seq2Seq architecture was one of the most transformative advances in dialogue modelling, and it was followed by a multitude of approaches building on top of it and improving it in various ways.
Specifically in the case of dialogue, there were a number of aspects in which it could be improved, as the model did not account for the turn-taking nature of dialogue, and the overall objective of a successful conversation still remained to be formulated better. Therefore, in this section we will focus on the key improvements that made the Seq2Seq model represent dialogue more adequately.

Figure 2.8: HRED model architecture (Serban et al., 2015)

2.3.1 Hierarchical Response Generation Models

One of the key improvements of the standard Seq2Seq model addresses the turn-taking nature of dialogue. Initially designed for machine translation, Seq2Seq assumed an utterance in the source language at the input and its translation in the target language at the output. This approach was adopted for response generation by 'flattening' the dialogue context into one large input sequence. Although better than providing the model with no context at all, flattening has the obvious shortcomings that the information about the speakers gets lost, and the overall length of the input becomes extremely high, resulting in additional challenges with model training and often requiring the context to be cut quite severely. To address this, Serban et al. (2015) proposed the Hierarchical Recurrent Encoder-Decoder (HRED) model, shown in Figure 2.8. In HRED, the encoding procedure has two stages: first, every utterance of the context is encoded with an utterance-level encoder RNN into a dense vector. Then, these compact latent utterance representations are encoded using another RNN, producing a dense vector similar to the one at the previous stage. This final dialogue context representation serves as the initial state of the decoder, equivalent to that in the standard Seq2Seq.
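The two-stage encoding can be sketched as follows; mean-pooling is a deliberately simplified, hypothetical stand-in for both the utterance-level and context-level RNN encoders:

```python
def mean_pool(vectors):
    """Toy stand-in for an RNN encoder: element-wise mean of its inputs."""
    n, dim = len(vectors), len(vectors[0])
    return [sum(v[d] for v in vectors) / n for d in range(dim)]

def hred_encode(dialogue):
    """HRED-style two-stage encoding: compress each utterance's word vectors
    into one utterance vector, then compress the sequence of utterance
    vectors into a single dialogue-context vector that initialises the
    decoder."""
    utterance_vecs = [mean_pool(utterance) for utterance in dialogue]
    return mean_pool(utterance_vecs)

# Two utterances, with toy 2-dimensional word embeddings.
context = hred_encode([[[1.0, 0.0], [0.0, 1.0]], [[2.0, 2.0]]])
print(context)  # [1.25, 1.25]
```

The point of the structure, rather than of the stand-in encoder, is that each utterance is summarised before the context level ever sees it, which is what shortens the gradient paths discussed next.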
In addition to the reasons mentioned above, the hierarchical nature of the model also helps train the entire architecture more efficiently. Backpropagation through time (BPTT), the technique normally used to train recurrent networks, passes gradients through the RNN's weights word by word over the entire input sequence, resulting in exponentially small gradient values propagated to the first encoding steps (also referred to as the 'vanishing gradient' problem). The hierarchical architecture reduces the length of gradient paths in the computational graph, thus alleviating this problem. This was especially relevant for 'plain' RNNs, which widely suffer from vanishing gradients (see our discussion of RNNs in Section 2.1.4).

Figure 2.9: RNN-based autoencoder

2.3.2 Representation Learning with Autoencoders

A key aspect of modelling dialogue is obtaining an efficient latent representation of the underlying utterances that captures their meaning and is invariant to surface variations. This representation learning problem is normally tackled with a type of model called the autoencoder. An autoencoder (AE) is a model that is trained to attempt to copy its input to its output (Goodfellow et al., 2016). An autoencoder normally consists of two parts: the encoder and the decoder. The encoder produces a latent representation of the model's input, which the decoder then uses to produce the output, ideally a copy of the input. Crucially, in this process the resulting latent representation, i.e. the encoder's output (also referred to as the bottleneck), is of a dimensionality significantly lower than that of the input and the reconstructed output. By learning to 'compress' the data into a more compact representation (i.e.
of a reduced dimensionality) which still contains enough information for the decoder to reconstruct the input, the autoencoder determines which 'features' of the input are informative and which are not and can be ignored. Figure 2.9 shows an autoencoder in which both the encoder and the decoder are RNNs, a model similar to the one we will be working with in Chapter 7; in the figure, z is the latent input representation that is regarded as the autoencoder's main output. The key question in training an autoencoder is how to make its latent space continuous. That is, given a pair of latent vectors for two inputs seen during training, an autoencoder with a continuous latent space would not only be able to reconstruct those two inputs, but also produce meaningful reconstructions from arbitrary positions along the line between the two encodings. The type of autoencoder directly aimed at learning continuous latent spaces is based on Bayesian methods and is called the Variational Autoencoder (VAE; Kingma and Welling, 2014; Rezende et al., 2014). A VAE represents the encoding z as a continuous latent variable produced by a probabilistic recognition model (the counterpart of the AE's deterministic encoder) approximating the posterior distribution of this latent variable given the input, q(z | x), in practice usually a diagonal Gaussian. The actual reconstruction is then obtained by running the generation model (the counterpart of the AE's decoder) conditioned on a sample from q(z | x). The variational counterpart of the RNN autoencoder is shown in Figure 2.10: the recognition model there consists of an RNN encoder and a linear projection layer generating the parameters $\mu$ and $\sigma$ of the q(z | x) Gaussian, a sample from which then goes into the generation model.

Figure 2.10: RNN-based VAE
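A minimal sketch of the recognition step: the usual reparameterised sample $z = \mu + \sigma \cdot \epsilon$, plus the closed-form KL term for a diagonal Gaussian against a standard-normal prior (the closed form is standard, though not derived in the text):

```python
import math
import random

def sample_z(mu, sigma, rng):
    """Reparameterised sample from the diagonal Gaussian q(z|x) = N(mu, sigma^2):
    z = mu + sigma * eps, with eps ~ N(0, I)."""
    return [m + s * rng.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]

def kl_to_standard_normal(mu, sigma):
    """Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian:
    0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)."""
    return 0.5 * sum(s * s + m * m - 1.0 - 2.0 * math.log(s)
                     for m, s in zip(mu, sigma))

rng = random.Random(0)
z = sample_z([0.0, 0.0], [1.0, 1.0], rng)
print(len(z))                                          # 2
print(kl_to_standard_normal([0.0, 0.0], [1.0, 1.0]))   # 0.0 (posterior == prior)
```

The KL term being zero exactly when the posterior equals the prior makes the 'vanishing KL term' failure mode discussed below easy to picture: the recognition network can trivially minimise this term by ignoring the input.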
As opposed to the MLE optimisation objective of a regular AE, the VAE is trained to optimise the Evidence Lower Bound (ELBO) objective, which looks as follows:

$\mathcal{L}(x) = -\mathrm{KL}\left(q(z \mid x) \,\|\, p(z)\right) + \mathbb{E}_{q(z \mid x)} \log p(x \mid z) \le \log p(x)$  (2.12)

where p(z) is the prior distribution of z, usually set to a standard Gaussian with $\mu = 0$, $\sigma = 1$, and $\log p(x)$ is the data likelihood, which the AE optimises. The first term of the formula is the Kullback-Leibler divergence between the approximate posterior and the prior:

$\mathrm{KL}(q \,\|\, p) = \sum_{x \in X} q(x) \log \frac{q(x)}{p(x)}$  (2.13)

where X is the probability space on which q and p are defined. The KL divergence provides an expectation of the log-likelihood ratio between the two distributions with respect to q(x). The intuition behind it is measuring how much more probable the data is under one distribution than under the other. In the VAE optimisation objective, the KL term keeps the model from making q(z | x) deterministic (which it could become if the recognition network learned to output $\sigma$ close to 0) and instead forces it to stay close to the prior p(z). In practice, penalising the divergence between the prior and the posterior comes with a challenge: early in training, the KL term tends to reduce to zero, essentially making the posterior identical to the prior. This problem is known as the 'vanishing KL term' problem (Bowman et al., 2016).

Figure 2.11: The DI-VAE/DI-VST models (Zhao et al., 2018)

In the context of RNN-based models, which are extremely prone to overfitting, the source of stochasticity (i.e. the latent variable z) can be overfitted as well, so that it does not matter what information is encoded in it, as the network adapts itself to the noise. Such a model would essentially converge to its non-variational and severely overfitted counterpart. Therefore, optimisation of the ELBO objective is highly problematic in practice. In order to facilitate learning, Bowman et al.
(2016) propose the 'KL term annealing' technique. KL weight annealing imposes a weight on the KL term that is set to zero at the beginning of training, effectively leaving the model with a regular MLE objective. After a certain number of epochs of initial fitting to the data, the annealing weight starts growing on the interval [0, 1], eventually ending up with the objective of Eq. 2.12. However, in practice KL annealing is itself very unstable and needs careful fine-tuning of the annealing schedule. Therefore, more principled techniques for ELBO optimisation were introduced later on; the most notable is the approach of mutual information maximisation between the input and the latent variable (Zhao et al., 2019), and a version of it for discrete latent variables (Zhao et al., 2018). The latter is called the Discrete Information VAE (DI-VAE) and is shown in Figure 2.11. It is a variant of a VAE with two modifications. Firstly, its optimisation objective accounts for the mutual information I between the input and the latent variable, which is found to be implicitly discouraged in the original VAE objective (see the authors' derivation in Eqs. 2.14 and 2.15):

$\mathcal{L}_{\text{VAE}} = \mathbb{E}_{x}\left[\mathbb{E}_{q_R(z \mid x)}\left[\log p_G(x \mid z)\right] - \mathrm{KL}\left(q_R(z \mid x) \,\|\, p(z)\right)\right] = \mathbb{E}_{q(z \mid x) p(x)}\left[\log p_G(x \mid z)\right] - I(Z, X) - \mathrm{KL}\left(q(z) \,\|\, p(z)\right)$  (2.14)

$\mathcal{L}_{\text{DI-VAE}} = \mathcal{L}_{\text{VAE}} + I(Z, X) = \mathbb{E}_{q_R(z \mid x) p(x)}\left[\log p_G(x \mid z)\right] - \mathrm{KL}\left(q(z) \,\|\, p(z)\right)$  (2.15)

where x is the input utterance, z is the latent variable (X and Z corresponding to their batch-wise vectors), R and G are the recognition and generation models (implemented as RNNs) respectively, and $q(z) = \mathbb{E}_x\left[q_R(z \mid x)\right]$. Secondly, the latent variable z in DI-VAE is discrete, as opposed to the continuous one in a regular VAE. The discrete latent code lends itself well to interpretation and can be used, e.g.
to cluster inputs by the values of certain 'bits' of their latent codes. The discrete nature also makes the calculation of the KL term more tractable via the Batch Prior Regularisation technique (Zhao et al., 2018):

$\mathrm{KL}\left(q'(z) \,\|\, p(z)\right) = \sum_{k=1}^{K} q'(z = k) \log \frac{q'(z = k)}{p(z = k)}$  (2.16)

where K is the number of possible values of z, and $q'(z)$ is the approximation to q(z) over N data points:

$q'(z) = \frac{1}{N} \sum_{n=1}^{N} q_R(z \mid x_n)$  (2.17)

In addition to DI-VAE, Zhao et al. (2018) introduced DI-VST, DI-VAE's counterpart working in a Variational Skip-Thought manner (Hill et al., 2016). Specifically, it reconstructs the input x's previous ($x_p$) and next ($x_n$) context utterances with the corresponding independent generation models $G_p$ and $G_n$:

$\mathcal{L}_{\text{DI-VST}} = \mathbb{E}_{q_R(z \mid x) p(x)}\left[\log p_G^{n}(x_n \mid z)\, p_G^{p}(x_p \mid z)\right] - \mathrm{KL}\left(q(z) \,\|\, p(z)\right)$  (2.18)

We are going to use DI-VAE and DI-VST later in Chapter 4, as well as empirically compare them to the 'classic' VAE on a downstream dialogue response generation task.

2.3.3 Latent Variable Models for Dialogue Response Generation

Apart from resulting in a continuous latent space, the use of a latent variable has another intuitive benefit for dialogue: increasing the diversity of responses. In chat-oriented dialogue, low diversity and insufficient informativeness is a common problem with Seq2Seq models. Given the extremely high variance of responses given their contexts in the training corpora (more on those later in Section 2.7), the regular MLE optimisation objective of such models forces them to learn the 'corpus-average', i.e. the most recurrent responses in the training set, which are usually 'I don't know', 'I'm not sure', and the like (Jiang and de Rijke, 2018).

Figure 2.12: Latent variable Seq2Seq model (Cao and Clark, 2017)

A series of approaches addressed that problem, e.g. Li et al.
(2016a) used maximisation of mutual information (MMI) between the context and the response as part of the objective function; Zhang et al. (2018) formulated it in terms of the Variational Information Maximisation Objective (VIMO) and approached the problem under the adversarial learning framework. A widely used technique to increase the diversity of responses was random sampling in the Seq2Seq decoder (Ippolito et al., 2019), i.e. drawing a random entry from the model's probability distribution for every output token. But this often resulted in degraded generation performance, as the MLE objective by definition only assumes optimising for the most probable output, and even the second most probable word can be irrelevant or disfluent in most cases. In turn, latent variable models allow non-determinism to be directly incorporated into the model as well as into the training objective. From the dialogue perspective, this means that for every dialogue context there can potentially be multiple correct responses, and the model will optimise for all of them. From the Bayesian optimisation point of view, it means learning the posterior distributions over these responses instead of the point-wise optimisation of MLE, as we discussed with VAEs previously. Such models were introduced by Serban et al. (2017b) and Cao and Clark (2017); the latter is visualised in Figure 2.12 and discussed below. Under this approach, the output probability gets conditioned on a latent variable z (in addition to the input, as in regular MLE training):

$P(Y \mid X) = \int_{z} P(Y \mid z, X)\, P(z)\, dz$  (2.19)

where $P(z) = \mathcal{N}(0, I_n)$ is usually set to a standard Gaussian prior. This conditional probability denotes the distribution of correct responses for a given input mentioned above; as a value for z is sampled, a specific response from the distribution gets picked for further generation.

Figure 2.13: The LAED architecture (training)
With the presence of the system's output Y, different from the input X (as opposed to the VAE case), the ELBO objective takes the following form:

$\log P(Y \mid X) \ge -\mathrm{KL}\left(Q(z \mid X, Y) \,\|\, P(z)\right) + \mathbb{E}_{z \sim Q} \log P(Y \mid z, X)$  (2.20)

where Q is the proposal (or variational) distribution used to approximate the posterior P(z | X, Y) during optimisation. The Discrete-Information VAE technique we discussed in Section 2.3.2 was integrated into an encoder-decoder architecture, resulting in the Latent Action Encoder-Decoder (LAED) model (Zhao et al., 2018), shown in Figure 2.13. The main part of LAED is an encoder-decoder Seq2Seq model which (1) encodes the dialogue context c (including the user's last turn) with a hierarchical recurrent encoder and then produces an approximate posterior $p_\pi(z \mid c)$ of the latent variable z using its 'policy' feed-forward network $\pi$, and (2) decodes the system's response y using samples from $p_\pi(z \mid c)$ together with c. Crucially though, at training time it works in a multi-task setup with a DI-VAE/DI-VST model whose recognition model R (separate from the main model's encoder) encodes the response and approximates a posterior $q_R(z \mid y)$. The auxiliary model then reconstructs either the response itself (DI-VAE) or its previous and next context utterances (DI-VST) using its generation model, also separate from the main task's decoder. During training, the main task's decoder uses samples from $q_R(z \mid y)$ of the auxiliary DI-VAE/DI-VST instead of $p_\pi(z \mid c)$, and the policy $\pi$ only learns to reproduce $q_R(z \mid y)$ via the MLE objective $\mathbb{E}_{p(y \mid c)}\left[q_R(z \mid y)\right]$. The LAED training objective is as follows:

$\mathcal{L}_{\text{LAED}} = \mathbb{E}_{q_R(z \mid y) p(c, y)}\left[\log p_\pi(z \mid c) + \log p_F(y \mid z, c)\right]$  (2.21)

where F is the recurrent decoder.

Figure 2.14: Visualisation of attention in machine translation (Olah and Carter, 2016)
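The discrete KL term of Eqs. 2.16-2.17, which DI-VAE (and hence LAED's auxiliary recognition network) relies on, is straightforward to compute over a batch; a minimal sketch with hand-picked toy posteriors:

```python
import math

def batch_prior_kl(posteriors, prior):
    """Batch Prior Regularisation: average the per-example posteriors
    q_R(z | x_n) into q'(z) (Eq. 2.17), then take the discrete
    KL(q'(z) || p(z)) of Eq. 2.16."""
    n, k = len(posteriors), len(prior)
    q_avg = [sum(q[i] for q in posteriors) / n for i in range(k)]
    return sum(q_avg[i] * math.log(q_avg[i] / prior[i])
               for i in range(k) if q_avg[i] > 0.0)

uniform_prior = [0.5, 0.5]
# Two one-hot posteriors average out to the uniform prior: the KL is zero.
print(batch_prior_kl([[1.0, 0.0], [0.0, 1.0]], uniform_prior))  # 0.0
# A single peaked posterior diverges from the uniform prior by log 2.
print(batch_prior_kl([[1.0, 0.0]], uniform_prior))              # 0.693...
```

The first example shows why the technique is batch-wise: each individual posterior is maximally confident, yet the batch average still matches the prior, so confident (informative) codes are not penalised as long as they are diverse across the batch.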
We are going to look at the potential of LAED's latent codes for dialogue knowledge transfer across datasets in Chapter 4.

2.3.4 Attention Mechanism

The most widely-used modification to the standard encoder-decoder architecture came, like the original model itself, from machine translation: the attention mechanism introduced by Bahdanau et al. (2015) addressed the problem of insufficient information flowing to the decoder from the final encoder state. While significantly more efficient than plain RNNs, LSTM/GRU models still do not produce a perfect representation of the input sequence for further decoding, and more importantly, this representation is static, while intuitively translation takes place in segments. For example, to generate one noun phrase, the translating decoder in Figure 2.14 (Olah and Carter, 2016) would mainly have to 'attend' to this phrase at the input; that is, the translation of the phrase 'zone économique européenne' can be done without knowing its left or right contexts. Attention is visualised in Figure 2.15: as opposed to the original Seq2Seq model, which only passes the final encoder state as the context for the decoder, this approach preserves all the intermediate encoder states and passes them all, weighted with the corresponding alignment scores learned as part of the end-to-end training procedure. The alignment scores $\alpha_{t,i}$ tell how much information the decoder can infer from the encoder state $h_i$ (i.e. having encoded the tokens $x_1, \dots, x_i$) while generating the t'th output token:

$\alpha_{t,i} = \frac{\exp\left(\mathrm{score}(s_{t-1}, h_i)\right)}{\sum_{i'=1}^{n} \exp\left(\mathrm{score}(s_{t-1}, h_{i'})\right)}$  (2.22)
Figure 2.15: Attention mechanism (Bahdanau et al., 2015)

where $s_{t-1}$ is the previous decoder state, $h_i$ is an encoder state, and score is a scoring function, implemented (among a series of other options) as a feed-forward neural network with trainable weights $W_a$ and $v_a$:

$\mathrm{score}(s_t, h_i) = v_a^{\top} \tanh\left(W_a [s_t; h_i]\right)$  (2.23)

At the t'th decoding step, the decoder receives the information from all the encoder states as a weighted mixture of the following form:

$c_t = \sum_{i=1}^{n} \alpha_{t,i}\, h_i$  (2.24)

Luong et al. (2015) also explore a range of similar models. Specifically, global attention, the closest to that of Bahdanau et al. (2015), defines the alignment score for the t'th step via $s_t$ instead of $s_{t-1}$, and considers several variants of the score function:

$\mathrm{score}(s_t, h_i) = \begin{cases} s_t^{\top} h_i & \text{dot} \\ s_t^{\top} W_a h_i & \text{general} \\ v_a^{\top} \tanh\left(W_a [s_t; h_i]\right) & \text{concat} \end{cases}$  (2.25)

The alternative, local attention, instead of calculating alignment scores of the decoder state against all the encoder states, uses a fixed window $[p_t - D, p_t + D]$, where $p_t$ is a position within the encoded sequence predicted by the model, and D is a constant set empirically. The authors use two ways of calculating $p_t$: monotonic alignment, where $p_t = t$, and predictive alignment, defined as follows:

$p_t = S \cdot \mathrm{sigmoid}\left(v_p^{\top} \tanh(W_p h_t)\right)$  (2.26)

where $W_p$ and $v_p$ are trainable parameters, and S is the input sentence's length. In addition, in order to favour alignment scores around $p_t$, the authors place a Gaussian distribution centred around it on $\alpha$:

$\alpha'_{t,i} = \alpha_{t,i} \exp\left(-\frac{(i - p_t)^2}{2\sigma^2}\right)$  (2.27)

where $\sigma = \frac{D}{2}$.
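The read-out of Eqs. 2.22 and 2.24 takes only a few lines; for compactness the sketch below plugs in the 'dot' score from Eq. 2.25 rather than the feed-forward score of Eq. 2.23:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention_context(prev_state, enc_states):
    """Score the previous decoder state against every encoder state, softmax
    the scores into alignments alpha_{t,i} (Eq. 2.22), and return the weighted
    context vector c_t (Eq. 2.24). Uses the 'dot' score of Eq. 2.25."""
    scores = [dot(prev_state, h) for h in enc_states]
    m = max(scores)                               # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]
    dim = len(enc_states[0])
    context = [sum(a * h[d] for a, h in zip(alphas, enc_states))
               for d in range(dim)]
    return alphas, context

alphas, c_t = attention_context([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
print(round(sum(alphas), 6))  # 1.0 (the alignments form a distribution)
```

With one-hot encoder states, as here, the context vector simply equals the alignment weights, which makes the 'weighted mixture' reading of Eq. 2.24 easy to verify by hand.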
Experimental evaluation on an English-to-German machine translation task showed that local attention with predictive alignment was superior to the other individual approaches; however, the best result was obtained with an ensemble of 8 models with different attention mechanisms and training aspects. Although motivated by machine translation, attention became widely used for dialogue generation, both chat-oriented and goal-oriented. For the latter, this mechanism is at the core of another key technique: copy-augmented decoding.

2.3.5 Copy-Augmented Decoding

Copy-augmented Seq2Seq models (Eric and Manning, 2017; Zhao and Eskénazi, 2018) are used to handle rare or out-of-vocabulary (OOV) words, or content of the user's query that needs to be reflected in the response as well, e.g. in the affirmation that all the details of the user's request have been received and processing has started. Attention-based copying is based on the idea of pointer networks (Vinyals et al., 2015), which produce a permutation of the input tokens as their output and were primarily aimed at sorting the input sequence and at various combinatorial optimisation problems. In dialogue, as in other NLP tasks, hybrid models combining the 'pure generation' and 'pure pointer' mechanisms are used; here we will focus on Pointer-Generator Networks (See et al., 2017), shown in Figure 2.16. The pointer-generator model defines the probability of generating the word w at a certain decoding step as the following mixture:
Figure 2.16: Pointer-Generator network architecture (See et al., 2017)

$P(w) = p_{\text{gen}} P_{\text{vocab}}(w) + (1 - p_{\text{gen}}) \sum_{i : w_i = w} a_{t,i}$  (2.28)

where $P_{\text{vocab}}$ is the probability of generating the word from the decoder's vocabulary under the conventional decoding procedure, and the $a_{t,i}$ are the attention alignment scores of all the encoder states wherever the word w is observed at the input. Finally, the mixture parameter $p_{\text{gen}}$ is defined as follows:

$p_{\text{gen}} = \sigma\left(w_{h^*}^{\top} h_t^* + w_s^{\top} s_t + w_x^{\top} x_t + b_{\text{ptr}}\right)$  (2.29)

where $h_t^*$ is the attention context vector (equivalent to $c_t$ in the attention derivation above), $s_t$ is the decoder state, and $x_t$ is the decoder input, all at decoding step t, and $w_{h^*}$, $w_s$, $w_x$, $b_{\text{ptr}}$ are trainable parameters. In this approach, the decoder works with an 'extended vocabulary', the union of the original vocabulary and all the words appearing at the input. Therefore, for an OOV word, $P_{\text{vocab}}(w) = 0$, but if it is present at the input, it can still be transferred to the output if its attention weight is high. On the other hand, if a word is not present at the input, $\sum_{i : w_i = w} a_{t,i} = 0$, and it can only be generated from the decoder's internal state via $P_{\text{vocab}}(w)$. The particular implementation described above was used for hybrid abstractive/extractive document summarisation, but in general the copy-augmented decoding technique is widely used in dialogue, semantic parsing (Jia and Liang, 2016), and language modelling (Merity et al., 2017).

Figure 2.17: Pointer-Sentinel Mixture Model (Merity et al., 2017)

The latter, Pointer Sentinel Mixture Models (PSMs), shown in Figure 2.17, we are going to use in Chapter 4. As seen in the figure, a PSM extends the pointer distribution with a sentinel element which is used to redistribute the probability mass in case of low confidence of the model's pointer component.
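The mixture of Eq. 2.28 reduces to a few lines of code; a hedged sketch, with hand-picked attention weights and vocabulary probabilities standing in for a trained model's outputs:

```python
def pointer_generator_prob(word, p_gen, p_vocab, attn, input_tokens):
    """Final P(w) of Eq. 2.28: mix the decoder's vocabulary distribution with
    the attention mass on every input position where w occurs. OOV words have
    P_vocab(w) = 0 and can only be copied from the input."""
    copy_mass = sum(a for a, tok in zip(attn, input_tokens) if tok == word)
    return p_gen * p_vocab.get(word, 0.0) + (1.0 - p_gen) * copy_mass

input_tokens = ["book", "to", "szczecin"]   # 'szczecin' is OOV for the decoder
attn = [0.2, 0.1, 0.7]                      # toy alignment scores a_{t,i}
p_vocab = {"book": 0.4, "to": 0.3}          # toy in-vocabulary distribution
# The OOV city can still be produced, purely through the copy mechanism:
print(pointer_generator_prob("szczecin", 0.8, p_vocab, attn, input_tokens))  # ~0.14
```

Setting `p_gen` to 1.0 recovers plain vocabulary decoding, and setting it to 0.0 recovers a pure pointer network, which is exactly the gating behaviour the sentinel g controls in the PSM variant.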
The intuition is as follows: the lowest-confidence case of a 'vanilla' copy model, a near-uniform distribution derived from the attention scores, is supposed to be avoided in the PSM by putting most of the probability mass on the sentinel. This sentinel also works as a gating function in the hybrid pointer-generator prediction (see g in Figure 2.17 and $p_{\text{gen}}$ in Eq. 2.28) and controls the model's final prediction. As such, g can take values in the interval [0, 1], representing the mixture parameter between the pointer and generator distributions: when g = 0, the next word is predicted from the pointer distribution only, and when g = 1, the model's confidence in the pointer distribution is at its lowest, and it falls back to generator-only prediction. Apart from these enhancements of the Seq2Seq model, attention was also used as a key component of text representation itself; specifically, of a hierarchical text representation model for document classification (Yang et al., 2016). Under this approach, two attention types were calculated: utterance-level attention, representing the contribution of a specific word to the overall meaning of a sentence, and document-level attention, representing a similar relation for individual sentences within the document. Later on, the attention-based approach to text representation developed into a separate technique called self-attention, which we describe below.

2.3.6 Self-Attention

The original attention mechanism brought the idea of calculating alignment scores between different parts of a sequential model's internal state. Self-attention, introduced by Vaswani et al.
(2017), uses alignment as the main means of representing sequential input: instead of directional word-by-word encoding, self-attention computes alignment scores of every word against every other word in the input, thus producing a representation of every word in the global context of the entire input sequence. More formally, the alignment scores are calculated via three quantities associated with every word $x_i$: 'key', 'query', and 'value' ($k_i$, $q_i$, $v_i$, respectively). The model obtains these via the following three corresponding projection matrices:

$q_i = x_i^{\top} W^Q \qquad k_i = x_i^{\top} W^K \qquad v_i = x_i^{\top} W^V$  (2.30)

The mathematical formulation of self-attention is largely similar to that of the original attention mechanism, the main difference being that it is defined via $k_i$, $q_i$, and $v_i$. The self-attention score of how a word $x_j$ affects the target word $x_i$ is calculated as follows:

$\mathrm{score}_{i,j} = \frac{q_i \cdot k_j}{\sqrt{d_k}}$  (2.31)

where $d_k$ is the dimensionality of the key vectors. This operation is referred to by the authors as scaled dot-product attention. The second step is similar to the original attention: all the self-attention scores for $x_i$ are softmax-normalised into alignment weights:

$\alpha_{i,j} = \frac{\exp(\mathrm{score}_{i,j})}{\sum_{k=1}^{n} \exp(\mathrm{score}_{i,k})}$  (2.32)

Finally, the representation of a word $x_i$ is obtained as a mixture of all the input words' values $v_j$, weighted by their alignment weights given the target word:

$z_i = \sum_{j=1}^{n} \alpha_{i,j} \cdot v_j$  (2.33)

The self-attention representations of individual words $z_i$ are then fed into a feed-forward network. This architecture is duplicated in the form of several sub-models, each with separate matrices $W^Q$, $W^K$, and $W^V$. The final representations produced by these sub-models are referred to as self-attention heads.
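A single scaled dot-product head (Eqs. 2.30-2.33) fits in a few lines of plain Python; the projection matrices are set to the identity purely for illustration:

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def self_attention(X, WQ, WK, WV):
    """One scaled dot-product attention head over the rows of X: project into
    queries/keys/values (Eq. 2.30), score every pair (Eq. 2.31), softmax into
    alignment weights (Eq. 2.32), and mix the values (Eq. 2.33)."""
    Q, K, V = matmul(X, WQ), matmul(X, WK), matmul(X, WV)
    d_k = len(K[0])
    Z = []
    for q in Q:
        scores = [sum(qc * kc for qc, kc in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        Z.append([sum(w * v[d] for w, v in zip(weights, V))
                  for d in range(len(V[0]))])
    return Z

I2 = [[1.0, 0.0], [0.0, 1.0]]          # identity projections, for illustration
Z = self_attention([[1.0, 0.0], [0.0, 1.0]], I2, I2, I2)
print(len(Z), len(Z[0]))  # 2 2
```

With identity projections and one-hot inputs, each word's query matches its own key most strongly, so every $z_i$ places the largest weight on the word's own value; separately trained $W^Q$, $W^K$, $W^V$ per head are what let different heads attend to different relations.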
As reported in the original paper, these independently initialised and trained heads produce different representation 'subspaces' which may account for different linguistic phenomena, e.g. anaphoric links or syntactic dependencies.

Figure 2.18: Transformer encoder-decoder architecture (Vaswani et al., 2017)

The self-attention mechanism lies at the core of the Transformer encoder-decoder model, shown in Figure 2.18. Both the encoder and the decoder of the Transformer use logic similar to that described above, with the encoder's output serving as the input representation and the decoder's output being fed into additional linear + softmax layers (together referred to as the 'language modelling head') used for generating the output probability distributions over the vocabulary tokens. More specifically, the decoder generates the output token by token, just like the Seq2Seq model, but by attending to (1) the outputs of the stack of encoders and (2) the sequence generated up to the current timestep. Transformers set a new state of the art in a series of NLP tasks, thus largely replacing RNN-based architectures as the main way of producing robust text representations. The self-attention architecture started a new generation of text models exploiting the benefits of global-context word representation and the increased parallelism of the self-attention pipeline over recurrent sequential processing. The most notable of these models are Bidirectional Encoder Representations from Transformers (BERT) by Devlin et al. (2019) and the Generative3 Pretrained Transformers (GPT/GPT-2) by Radford et al. (2018); trained on massive amounts of data and designed to be efficiently adapted to a variety of downstream tasks, they largely enabled the transfer learning paradigm in NLP, which will be discussed further.
2.4 Transfer Learning in NLP and Dialogue Modelling

Transfer learning is a direction in machine learning that assumes training one model for a specific task and then re-using the knowledge it has learned, partly or fully, on another task. The initial model is called the base model, and it is trained on large general-purpose datasets at the first stage, called pre-training. At the second stage, fine-tuning, the base model (or its core part, e.g. the utterance/dialogue encoder) is further trained for the target task, normally with additional task-specific parts introduced. Transfer learning came to NLP from the Computer Vision (CV) community, where large-scale training of image recognition models became common practice via ImageNet, the visual recognition challenge and the corresponding dataset (Deng et al., 2009). State-of-the-art ImageNet models, e.g. ResNet (He et al., 2016) and VGG (Simonyan and Zisserman, 2015), became widely used in the research community, and it soon became apparent that they could be adapted to a wide range of CV problems by fine-tuning on small in-domain datasets, eventually significantly outperforming the corresponding models trained from scratch.

2.4.1 Word Embedding Models

Figure 2.19: Word2Vec model architecture (Mikolov et al., 2013)

Similarly to CV with ImageNet, NLP experienced a transformation with word embeddings, which were the first widely transferable resources. Embeddings provide an efficient alternative to the '1-hot' representation widely used before. A 1-hot representation of a word assumes a vocabulary-sized binary vector of zeros, with a single 'one' corresponding to the word's index in the vocabulary. Embeddings improved on that by representing words as real-valued vectors

3 Here, "generative" is used in the sense of predicting (or generating) the next word given the context, i.e. language generation.
We will also use this sense of the term \u201cgenerative\u201d later in Chapter 5. \fChapter 2. Background and Motivation 40 Figure 2.20: Example relations between Word2Vec and GloVe vectors in a low-dimensional trainable space. Word embedding training objective is normally based on modelling a certain relationship between a word and its context (we will go into detail later) \u2014 this allows them to be trained in a \u2018self-supervised\u2019 fashion from internet-scale amounts of raw unannotated data. In Figure 2.19 is shown Word2Vec, one of the earliest and most widely-used word embedding models (Mikolov et al., 2013). The 2 versions of it: Continuous Bag-of-Words (CBoW) and Skip-gram \u2014 show that it can be trained either to predict the word from its surrounding context, or vice versa. The model\u2019s projection (or embedding) matrix that maps a 1-hot word vector into the latent space is the main result of the training. Another early embedding model \u2014 GloVe (Pennington et al., 2014) \u2014 uses word co-occurrence counts to predict the co-occurrence probability of a pair of words given their corresponding embedding vectors \u2014 the training objective makes the dot-product operation over a pair of word embedding vectors produce that probability. As multiple observations showed, both models\u2019 embedding spaces are able to encode di\ufb00erent linguistic relations between words, e.g. \u2018male \u2014 female\u2019 or \u2018company \u2014 CEO\u2019, see Figure 2.204 for a visualisation. Word representations produced by Word2Vec and GloVe gained extremely high popularity in the NLP community and were used to improve the performance of models in numerous tasks. Their principal shortcoming though was in the fact that they did not take word context while encoding it. That results in the inherent inability to encode sequences of words (e.g. 
utterances and paragraphs), so workaround approaches like 'mean-vector embedding' were used to represent phrases (i.e. the embedding of the phrase is the element-wise mean of the embeddings of the individual words). Another problem is that contextless models cannot handle polysemous words like 'book', 'fly', 'like': the resulting representations of such words will correspond to their most frequent sense in the dataset. The solution to these problems came with the next generation of embeddings based on sequence models.

4 Image credit: Renu Khandelwal (http://tiny.cc/w2v_glove_medium)

Figure 2.21: BERT 'pretrain-finetune' architecture (Devlin et al., 2019)

2.4.2 Contextual Word Embeddings

Neural models for sequential data, e.g. LSTMs and Transformers, brought the next generation of word embeddings: models such as ELMo (Peters et al., 2018), ULMFiT (Howard and Ruder, 2018), and BERT (Devlin et al., 2019). Trained as language models, they provide what is referred to as a 'contextual' representation (as opposed to the 'static' embeddings of Word2Vec or GloVe). These models can be efficiently used in downstream tasks by either replacing the 'language modelling head' with a task-specific one or training the two in a multi-task setup. Among all the contextual embedding models, Transformer-based ones have had arguably the widest impact on the NLP community — specifically, BERT, already mentioned above, was designed for use in downstream tasks in a 'pretrain-finetune' fashion (shown in Figure 2.21). Specifically, this versatility is achieved by (1) organising the input in 2 parts, i.e.
sentence A and sentence B (which may correspond to question/answer or context/response pairs in a downstream task), and (2) pre-training in a multitask setup using two objectives: Language Modelling (LM) and Next Sentence Prediction (NSP). For the LM objective, BERT uses the notion of the Masked Language Model (MLM), which assumes predicting randomly chosen tokens of the input based on the full encoded representation of the input sequence, i.e. using both the left and the right contexts. The NSP task makes the model learn the relation between the 2 input sentences — in the base case, whether sentence B actually follows sentence A in the original text or is a distractor. This secondary task helps improve the model's robustness to noise (if sentence B is set to be a randomly drawn distractor sentence) as well as making the model highly versatile in a variety of downstream classification-like tasks, e.g. Question Answering, Natural Language Inference, response retrieval and ranking. Upon introduction, the original BERT achieved state-of-the-art results on 11 NLP tasks and was one of the most extensively used base models for text representation in transfer learning for NLP. More BERT variations followed: for example, the Robustly Optimized BERT approach (RoBERTa, Liu et al., 2019), trained with dynamic MLM masking, a larger batch size, and the NSP task disabled during pretraining; SpanBERT (Joshi et al., 2020), modifying MLM to mask and predict entire spans of the input instead of individual tokens; and DistilBERT (Sanh et al., 2019), a version of BERT reduced in size by 40% while retaining 97% of the original model's accuracy, obtained via knowledge distillation.

2.4.3 Transfer Learning for Dialogue

Following wide success in fundamental NLP tasks (e.g. language modelling, semantic role labelling, coreference resolution), transfer learning techniques started emerging in dialogue.
The problem of dialogue system domain adaptation was posed in the Dialog State Tracking Challenge 3, which focused on adapting a goal-oriented dialogue state tracker to a new domain using a small set of seed in-domain data (Williams et al., 2016). That can be considered a transfer learning task, but from today's perspective, both the pre-training and fine-tuning stages were bound to a very specific and narrow domain, thus limiting the scale of the transfer. Nevertheless, several notable methods emerged from the challenge, all RNN-based (Henderson et al., 2014d; Mrksic et al., 2015). Later, Williams et al. (2017) introduced the HCN model (described earlier in Section 2.1.2), which is designed for 2-stage training: an initial training stage from moderate amounts of data, and a subsequent fine-tuning stage in an RL setup from interactions with real users or via an interactive human-in-the-loop process under the Conversation Learner framework (Williams and Liden, 2017). More recently, a major shift in transfer learning for dialogue was brought by the Transformer model described above. Specifically, the introduction of Transformer-based GPT and GPT-2 (Radford et al., 2018) set the new state of the art in human-like language generation. GPT models:

— are implemented using restricted unidirectional self-attention (where a token can only attend to its left context, as opposed to both left and right with e.g. BERT),

— consist of massive sets of parameters (the largest GPT-2 model has 1.5 billion parameters),

— are pre-trained at a large scale on the WebText corpus consisting of millions of documents,

— set the new state of the art on a number of language modelling tasks, while still underfitting on the WebText corpus, as reported by the authors.
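The restricted unidirectional self-attention mentioned in the first point can be illustrated with a toy attention mask. This is only a sketch of the masking idea; real implementations typically add a large negative value at masked positions of the attention scores before the softmax.

```python
import numpy as np

def attention_mask(n_tokens, causal):
    """Boolean mask: entry (i, j) is True if token i may attend to token j.
    GPT-style decoders use causal=True (left context only);
    BERT-style encoders use causal=False (full bidirectional context)."""
    if causal:
        return np.tril(np.ones((n_tokens, n_tokens), dtype=bool))
    return np.ones((n_tokens, n_tokens), dtype=bool)

print(attention_mask(4, causal=True).astype(int))
# [[1 0 0 0]
#  [1 1 0 0]
#  [1 1 1 0]
#  [1 1 1 1]]
```

The lower-triangular pattern is what allows GPT-style models to be trained as next-word predictors: no token can peek at its right context.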
GPT/GPT-2 employ language-modelling pre-training, which allows them to be re-used in a number of generation tasks, including conversational response generation. They were rapidly adopted in the dialogue community, and a number of 'pretrain-finetune' approaches to dialogue followed, the most notable of which are (1) TransferTransfo (Wolf et al., 2019), the winning submission at the ConvAI2 challenge on persona-based chat-oriented dialogue as per the automatic metrics, (2) the goal-oriented dialogue generation approach by Budzianowski and Vulic (2019), and (3) DialoGPT, an open-domain model by Zhang et al. (2020) pretrained at a large scale on the Reddit Conversations dataset. The latter achieved human-like performance in a 1-turn Turing test evaluation with humans. In Chapter 5, we are going to use GPT-2 as the base model in a 'pretrain-finetune' framework for dialogue domain adaptation, in both information-seeking and strictly task-oriented setups. Models like ELMo, BERT, and GPT-2 showed that transfer learning can be applied in NLP as well as in vision. A number of NLP tasks have already experienced the benefits of transfer learning, and it is now considered among the best practices to approach new NLP problems by transferring those models' knowledge in a 'pretrain-finetune' fashion instead of training from scratch. In dialogue modelling, the benefits of transfer learning are starting to emerge: conversation models are becoming more human-like in the naturalness, coherence, and appropriateness of their utterances, and goal-oriented dialogue models need significantly less training data to start working with reasonable accuracy. In the next section, we are going to discuss the intuition of what aspects of dialogue can be transferred across domains and datasets.
2.4.4 Dialogue Transfer Learning Intuition: Lexical and Interactional Dialogue Similarity

With the variety of models and techniques of knowledge transfer presented above, it is important to have an actual intuition of why transfer learning applies to dialogue and what exactly can be transferred. There are two major aspects in which dialogues can vary but nevertheless lead to similar meanings: interactional and lexical. Interactional similarity is analogous to syntactic similarity — when two distinct sentences have effectively identical meaning — except that it occurs not only at the level of a single sentence, but at the dialogue or discourse level. Figure 2.22 shows examples of interactional variants that lead to very similar final contexts, in this case, that the user wants to buy an LG phone. These dialogues can be said to be effectively similar for this domain. Lexical similarity, on the other hand, holds among utterances, or dialogues, when different words (or sequences of words) express meanings that are sufficiently similar in a particular domain (see again Figure 2.22). Unlike syntactic or interactional ones, lexical similarity is domain-specific — that is, a pair of dialogues which are equivalent sequences of dialogue acts, each one represented with equivalent syntactic structures, are not considered lexically similar if one has to do with restaurant search and the other with booking movie tickets.

USR: I would like an LG laptop sorry uhm phone
SYS: okay.

USR: I would like a phone by LG.
SYS: sorry a what?
USR: a phone by LG.
SYS: okay.

SYS: what would you like?
USR: an LG phone
SYS: okay.

SYS: what would you like?
USR: a phone
SYS: by which brand?
USR: LG
SYS: okay

SYS: you'd like a ...?
USR: a phone
SYS: by what brand?
USR: LG.
SYS: okay

SYS: so would you like a computer?
USR: no, a phone.
SYS: okay. by which brand?
USR: LG.
SYS: okay.

Figure 2.22: Interactional variations in a shopping domain

The intuition behind learning dialogue representations from data that are transferable across datasets and domains is capturing these similarities in a general 'dialogue footprint'. In this way, semantically similar dialogues with the same footprint would be clustered together — either by their embedding in the latent space or by explicit meaning representations, e.g. derived from a semantic parser. In the case of linguistically informed models, Eshghi and Lemon (2014) developed a method similar to Kwiatkowski et al. (2013) for capturing lexical similarity by creating clusters of semantic representations, based on observations that those clusters correspond to similar non-conversational actions observed within a domain (e.g. a database query, a flight booking, or any API call). Distributional methods could also be used for this purpose (Lewis and Steedman, 2013). In general, this kind of clustering is achieved when the domain-general semantics resulting from semantic parsing is grounded in a particular domain. We note that while interactional similarity in dialogue can be accounted for by semantic grammars or formal models of dialogue structure (such as DS-TTR, Eshghi et al., 2012, or KoS, Ginzburg, 2012), lexical similarity relations have to be learned from data.

2.5 Linguistically Informed Models of Dialogue

Linguistic resources are a major source of prior knowledge for dialogue models and in NLP tasks in general. In the setting of data-efficient training, where a model is limited in the number of examples it can learn from, it is especially important to incorporate prior knowledge into it, in the form of e.g. an ontology or a grammar.
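The embedding-based clustering intuition from the previous section can be sketched with the 'mean-vector' phrase embeddings mentioned earlier. All vectors below are invented for illustration; in practice they would come from trained Word2Vec or GloVe models.

```python
import numpy as np

# Toy static embeddings standing in for real Word2Vec/GloVe vectors.
emb = {
    "restaurant": np.array([0.9, 0.1, 0.0]),
    "diner":      np.array([0.8, 0.2, 0.1]),
    "movie":      np.array([0.0, 0.9, 0.3]),
    "find":       np.array([0.1, 0.1, 0.9]),
}

def phrase_vec(words):
    """'Mean-vector' phrase embedding: element-wise mean of the word vectors."""
    return np.mean([emb[w] for w in words], axis=0)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a = phrase_vec(["find", "restaurant"])
b = phrase_vec(["find", "diner"])
c = phrase_vec(["find", "movie"])
print(cos(a, b) > cos(a, c))  # True: lexically similar phrases land closer together
```

In this toy space, 'find restaurant' and 'find diner' cluster together while 'find movie' falls further away, which is the kind of grouping a general 'dialogue footprint' is meant to capture.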
Figure 2.23: Incremental parsing with DS-TTR/DyLan. The record type grows monotonically as the words of "A: Today", "...Robin arrives", "B: from?", "A: Sweden" are parsed one by one.

In dialogue systems, there exist approaches making use of linguistic knowledge of various kinds — e.g. Ramachandran and Ratnaparkhi (2015) proposed a method to represent and track the dialogue state via Relational Trees built on top of Knowledge Base entities extracted from the user's utterances; the social
dialogue model of Curry et al. (2018) benefited from the use of an ontology and entity linking as well. In this section, we are going to give an overview of an approach to modelling dialogue entirely based on explicit linguistic representations — specifically, a formal semantic grammar — and discuss its applicability to low-resource dialogue system bootstrapping.

2.5.1 Dynamic Syntax and Type Theory with Records (DS-TTR)

Dynamic Syntax (DS) is an action-based, word-by-word incremental and semantic grammar formalism (Kempson et al., 2001; Cann et al., 2005), especially suited to the highly fragmentary and context-dependent nature of dialogue. This formalism is implemented in the incremental semantic parser for dialogue processing, DyLan (Eshghi et al., 2011; Eshghi, 2015; Purver et al., 2011). In DS, words are conditional actions — semantic updates — and dialogue is modelled as the interactive and incremental construction of contextual and semantic representations (Eshghi et al., 2015); see Figure 2.23. The contextual representations provided by DS are fine-grained and jointly agreed upon by the interlocutors, as a result of processing questions and answers, clarification interaction, acceptances, self-/other-corrections, restarts, and other characteristic linguistic phenomena in dialogue — see Figure 2.24 for an example of how self-corrections and clarification requests are processed via a backtrack and search mechanism over the parse search graph (Hough, 2011; Hough and Purver, 2014a; Eshghi et al., 2015). Generation/surface realisation in DS is defined using trial-and-error parsing (see Section 2.5.2), guided by a generation goal, i.e. the semantic representation of the utterance to be generated. Generation thus proceeds word-by-word, similar to parsing (Purver et al., 2014; Hough, 2014).
Therefore, DS allows tracking not only the semantic content of the current turn while it is being constructed word-by-word (during either parsing or generation), but also the context of the conversation as a whole, with the grounded/agreed content of the conversation encoded as well; see e.g. Figure 2.25 (Eshghi et al., 2015; Purver et al., 2010). Crucially for the dialogue model to be described below, the inherent incrementality of DS-TTR, together with the word-level as well as cross-turn parsing constraints it provides, enables word-by-word exploration of the space of grammatical dialogues, and of the semantic and contextual representations that result from them.

5 DyLan is derived from "Dynamics of Language".

Figure 2.24: Processing self-corrections and clarification requests with DS-TTR/DyLan

Type Theory with Records (TTR) is an extension of standard type theory used in semantics and dialogue modelling (Cooper, 2005; Ginzburg, 2012). To support dialogue processing and allow for richer representations of the dialogue context, DS and the TTR framework were integrated (Purver et al., 2010; Purver et al., 2011; Eshghi et al., 2012). In TTR, logical forms are specified as record types (RTs): sequences of fields of the form [l : T] containing a label l and a type T. RTs can be witnessed (i.e. judged as true) by records of that type, where a record is a sequence of label-value pairs [l = v], and [l = v] is of type [l : T] in case v is of type T (see Figure 2.23 for example record types). A record type can be indefinitely extended, which naturally allows for representing incrementally growing meaning representations as more words are parsed or generated. That is, the semantics of the complete dialogue context expressed as an RT is a superset (or supertype) of the semantics midway through that same conversation. This is the key mechanism used in the dialogue system below.
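The record-type extension idea can be sketched by modelling RTs as plain label-to-type mappings. Real TTR subtyping is considerably richer (dependent fields, witnessing records), so this only illustrates the field-supersetting relation described above, and all field names are invented:

```python
def subsumes(general, specific):
    """True if every field of the more general record type appears,
    with the same type, in the more specific (extended) one."""
    return all(specific.get(label) == typ for label, typ in general.items())

# Semantics midway through parsing an utterance ...
midway = {"event": "es", "p1:today(event)": "t"}
# ... and after more words have been parsed: an extension of `midway`.
final = {"event": "es", "p1:today(event)": "t",
         "x:robin": "e", "p3:subj(event,x)": "t"}

print(subsumes(midway, final))   # True: the final content extends the midway content
print(subsumes(final, midway))   # False: not vice versa
```

It is this monotone growth of the representation that lets the parser check, at any word, whether the current semantics can still be extended to a target record type.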
Our linguistically informed approach in Chapter 3 will use this key DS-TTR property as well.

2.5.2 The BABBLE Dialogue Model

Incremental dialogue parsing as described above, combined with RL, is the essence of the BABBLE model. Here, the parser acts both as an NLU module providing a semantic dialogue state representation and as a word-by-word NLG module that 'babbles' word sequences under the single shared grammar. RL, in turn, works as a trainable Dialogue Manager with word-by-word actions.

The two main resources for BABBLE are: (a) a DS-TTR parser DS — either learned from data (Eshghi et al., 2013a) or constructed by hand — for incremental language processing and, more generally, for tracking the context of the dialogue using the model of feedback of Eshghi (2015), Eshghi et al. (2015, 2011); (b) a set D of transcribed successful dialogues in the target domain. In order to induce a fully incremental dialogue system from D, the following steps are performed:

1. Automatically induce the MDP state space S and the dialogue goal G_D from D;

2. Automatically define the state encoding function F : C → S, where s ∈ S is a binary state vector designed to extract, from the current context of the dialogue, the semantic features observed in the example dialogues D; c ∈ C is a DS context, i.e. a pair of TTR record types ⟨c_p, c_g⟩, where c_p is the content of the current clause (pending) as it is being constructed but not necessarily fully grounded yet, and c_g is the content already jointly built and grounded by the interlocutors (following the Dialogue Gameboard model of Ginzburg, 2012);

3. Define the MDP action set as the DS lexicon L (i.e. actions are words);

4. Define the reward function R as reaching G_D, while minimising dialogue length.
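The MDP induced by these steps is solved with standard Q-learning. A generic tabular Q-learning update, with made-up states and a toy 'lexicon' standing in for BABBLE's actual state space and DS lexicon, might look like:

```python
import random
from collections import defaultdict

# Generic tabular Q-learning; states would be the binary vectors produced by F,
# and actions the words of the DS lexicon L. Everything below is illustrative.
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.99, 0.1
actions = ["what", "would", "you", "like"]   # toy 'lexicon'

def choose(state):
    if random.random() < epsilon:            # epsilon-greedy exploration ('babbling')
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

update(state=(0, 1), action="what", reward=1.0, next_state=(1, 1))
print(Q[((0, 1), "what")])  # 0.1 after one update from a zero-initialised table
```

In BABBLE the reward arrives on reaching the goal G_D, with a length penalty, so value propagates backwards along successful word sequences.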
The generated MDP is then solved using RL, with a standard Q-learning method: train a policy π : S → L, where L is the DS lexicon and S the state space induced using F. The system is trained in interaction with a semantic user simulation, which is also automatically built from the dialogue data and is described in the next section.

The state encoding function F. As shown in Figure 2.25, the MDP state is a binary vector of size 2 × |Φ|, i.e. twice the number of the RT features. Matching the order in Equation 2.34, the first half of the state vector contains the semantics built incrementally in the current dialogue utterance (the pending content c_p), and the second half contains the grounded features (i.e. those agreed by the participants) φ_i. Formally:

s = ⟨F_1(c_p), ..., F_m(c_p), F_1(c_g), ..., F_m(c_g)⟩    (2.34)

where F_i(c) = 1 if c ⊑ φ_i, and 0 otherwise.

Figure 2.25: Semantics to MDP state encoding F with RT features. The figure shows the grounded and current-turn semantics of the dialogue "SYS: What would you like? / USR: a phone / SYS: by which brand?" as record types, the RT features F_1, ..., F_5 checked against them, and the resulting state vector ⟨Current Turn: 1, 1, 1, 1, 1; Grounded: 0, 1, 0, 1, 1⟩.

2.5.3 Semantic User Simulation for BABBLE

As an RL-based model, BABBLE requires a user simulator for efficient training. The simulator performs two key tasks during training: (1) generating user turns in the right dialogue contexts; and (2) word-by-word monitoring of the utterance so far generated by the system during exploration (i.e. while the system 'babbles' grammatical word sequences). Both tasks use the DS parser as well as the state encoding function F described above. They are thus performed based on the semantic context of the dialogue so far, as tracked by DS. The simulator is sensitive to the order in which information is received from the user, since its context includes both the semantic features of the current turn and the history of the conversation.
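A hypothetical sketch of the state encoding F and the simulator's subsumption-style monitoring check follows; the RT feature names, contexts, and trigger states are all invented for illustration:

```python
def encode(context, features):
    """F: map the set of RT features present in a DS context to a binary vector
    (Eq. 2.34 concatenates one such vector for the pending content c_p
    and one for the grounded content c_g)."""
    return [1 if f in context else 0 for f in features]

features = ["like(e)", "phone(x)", "brand(x3)", "by(x2,x3)", "question(x3)"]

# Toy trigger states, recorded prior to user turns in the training dialogues.
triggers = [encode({"like(e)", "phone(x)"}, features)]

def extendible_to_trigger(state):
    """Monitoring: the current state must be extendible to some trigger state,
    i.e. it may not contain a 1 where the trigger has a 0 (a bitmask-style check)."""
    return any(all(s <= t for s, t in zip(state, trig)) for trig in triggers)

print(extendible_to_trigger(encode({"like(e)"}, features)))        # True: still on track
print(extendible_to_trigger(encode({"question(x3)"}, features)))   # False: erroneous word
```

When the check fails, the simulator can end the learning episode and penalise the system immediately, rather than waiting for the turn to finish.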
The rules required for (1) & (2) are extracted automatically from the raw dialogue data D using DS and F. The dialogues in D are parsed and encoded using F incrementally. For (1), all the states that trigger the user into action, s_i = F(c) — where c is a DS context prior to any user turn — are recorded and mapped to what the user utters in those contexts. For more than one training dialogue there may be more than one candidate (in the same context/state). The rules extracted in this way will be of the form s_trig → {u_1, ..., u_n}, where the u_i are user turns. The s_i prior to the user turns also immediately follow system turns. Therefore, in order to monitor the system's behaviour during training, one needs to check that the current state upon a word generated by the system subsumes (or is extendible to) one of the s_i. This is performed through a bitmask operation. The simulation can therefore semantically identify erroneous actions (words) by the system. It would then terminate the learning episode and penalise the system immediately, which considerably speeds up training. To recap, the BABBLE model described above involves incrementally parsing dialogues and encoding the resulting semantics as state vectors in an MDP, which is then used for RL of word-level actions for system output (i.e. a combined incremental DM and NLG module for the resulting dialogue system).

2.6 Generalisation Power and Robustness of Dialogue Models

The key question in developing machine learning models able to work with minimal amounts of training data is how well they generalise to data unseen during training. Specific to the dialogue systems we are going to work with in this thesis, such novel properties of the data may be details of the dialogue task, e.g. conversations coming from a different domain or containing slot types/values unseen in the training domains (see e.g.
Zhao et al., 2017; Henderson et al., 2014b). It can also be some intrinsic property of the dataset itself, e.g. the presence of spoken disfluencies, out-of-domain (OOD) utterances, or simply noise in the data — all of which can be considered anomalous input. We then say that a dialogue system able to attain stable, consistently high performance across both 'clean' data and data containing anomalous phenomena is robust to those phenomena. In this thesis, we are going to work with the following 2 types of robustness, categorised by the specific phenomena in the input data:

— robustness to disfluencies is concerned with the surface variations in the input utterances appearing due to the nature of spoken language (see the next section for a detailed problem description). A system robust to disfluencies is expected to attain similar performance on 'clean' data where they are absent (e.g. examples collected in a controlled user study) and on more real-world conversations containing such phenomena (see a more detailed discussion in Section 2.6.1). We will explore this problem in Chapters 3 and 6.

— robustness to OOD input represents a similar quality of a system, with the phenomena of interest being user input turns not belonging to the system's designated domain, e.g. 'put on my evening playlist' queried to a restaurant search system. We address this problem in a more specific way than the previous one, and so expect an OOD-robust system to be able to (1) correctly identify anomalous inputs in the dialogues, and (2) attain a performance level on OOD-containing data similar to that on purely in-domain (IND) dialogues.
That is, the system is supposed to produce the originally designated responses for the IND turns as well as the special 'fallback' response for OOD turns, signalling that it encountered anomalous input, with a minimal accuracy trade-off between the two (see an overview of the problem area in Section 2.6.2). We will explore this problem in Chapter 7.

2.6.1 Spoken Disfluencies and Data Efficiency

Humans process (parse and generate) language incrementally, word by word, rather than turn by turn or sentence by sentence (Howes et al., 2010; Pickering et al., 1999; Ferreira et al., 2004). This leads to many characteristic phenomena in spontaneous dialogue that are difficult to capture in traditional linguistic approaches and are still largely ignored by dialogue system developers. These include various kinds of context-dependent fragments (Fernández and Ginzburg, 2002; Fernández, 2006; Kempson et al., 2017), false starts, suggested add-ons, barge-ins, and disfluencies. In this thesis, we are interested in the following disfluencies, common in natural spoken dialogue: pauses, hesitations, false starts, and self-corrections. These proceed according to a well-established general structure with three phases (Shriberg, 1994), as in "with [Italian + {uh} Spanish] cuisine", where 'Italian' is the reparandum, '{uh}' the interregnum, and 'Spanish' the repair. Specific disfluency structures have been shown to serve different purposes for both the speaker and the hearer (Brennan and Schober, 2001) — for example, a filled pause such as 'uhm' can elicit a completion from the interlocutor, but also serve as a turn-holding device; mid-sentence self-corrections are utilised to deal with the speaker's own error as early as possible, thus minimising effort.
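The three-phase structure can be written down token by token. Below is a toy hand-annotated example, not a detector; the RM/IM/RP tag names are our own shorthand for illustration:

```python
# Token-level annotation of 'with Italian uh Spanish cuisine'
# following the reparandum / interregnum / repair structure (Shriberg, 1994).
utterance = ["with", "Italian", "uh", "Spanish", "cuisine"]
tags      = ["O",    "RM",      "IM", "RP",      "O"]  # RM=reparandum, IM=interregnum, RP=repair

# Dropping the reparandum and interregnum yields the fluent reading after repair.
repaired = [w for w, t in zip(utterance, tags) if t in ("O", "RP")]
print(" ".join(repaired))  # with Spanish cuisine
```

An incremental detector must assign such tags word by word, with only the left context available, which is what makes the online version of the task hard.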
In dialogue systems, the detection, processing, and integration of disfluency structures are crucial to understanding the interlocutor's intended meaning (i.e. robust NLU), but also for coordinating the flow of the interaction. Like dialogue processing in general, the detection and integration of disfluencies needs to be strongly incremental: it needs to proceed word by word, enabling downstream processing to begin as early as possible and leading to more efficient and more naturally interactive dialogue systems (Skantze and Hjalmarsson, 2010; Schlangen and Skantze, 2009). Furthermore, incremental disfluency detection needs to proceed with minimal latency and commit to hypotheses as early as possible, in order to avoid 'jittering' in the output and having to undo the downstream processes started on the basis of erroneous hypotheses (Schlangen and Skantze, 2009; Hough and Purver, 2014b; Hough and Schlangen, 2015). While many current data-driven dialogue systems tend to be trained end-to-end on natural data, they do not normally take the existence of disfluencies into account. The problem is that, taken together with the particular syntactic and semantic contexts in which they occur, disfluencies are very sparsely distributed, which leads to a large mismatch between the training data and actual real-world spontaneous user input to a deployed system. This suggests a more modular, pipelined approach, where disfluencies are detected and processed by a separate, domain-general module, and only then are any resulting representations passed on for downstream processing. The upshot of such a modular approach would be a major advantage in generality, robustness, and data-efficiency.

2.6.1.1 Incremental Disfluency Detection Models

Work on disfluency detection has a long history, going back to Charniak and Johnson (2001), who set the challenge.
One of the important dividing lines through this work is the incrementality aspect, i.e. whether disfluency structure is predicted word by word. In the non-incremental setting, as the problem is essentially sequence tagging, neural models have been widely used. As such, there are approaches using an encoder-decoder Seq2Seq model with attention (Wang et al., 2016) and a Stack-LSTM model working as a buffer of a transition-based parser (Wang et al., 2016; Wang et al., 2017b), the latter attaining superior results in the non-incremental setting. Incremental, online processing of disfluencies is a more challenging task, if only because there is much less information available for tagging, i.e. only the context on the left. In a practical system, it also involves extra constraints and evaluation criteria, such as minimal latency and revisions to past hypotheses, which lead to 'jittering' in the output, with all the dependent downstream processes having to be undone, thus impeding efficiency (Hough and Purver, 2014b; Purver et al., 2018). Incremental disfluency detection models include that of Hough and Purver (2014b), who approach the problem information-theoretically, using local surprisal/entropy measures and a pipeline of classifiers for recognising the various components of disfluency structure. While the model is very effective, a simpler alternative is desirable; this was made possible by the overall success of RNN-based models, which Hough and Schlangen (2015) exploit. In Chapter 6, we will build on top of this model, as well as evaluate it further. Disfluency detection was also addressed in a multitask fashion by Schlangen and Hough (2017), whose secondary task is utterance segmentation — they demonstrate that the two tasks interact and are thus better approached jointly. Language models have been extensively used for improving neural models' performance. For example, Peters et al.
(2018) showed that a pre-trained language model improves RNN-based models' performance on a number of NLP tasks — either as the main feature representation for the downstream model, or as additional information in the form of a latent vector in the intermediate layers of more complex models. The latter approach was also employed by Peters et al. (2017) for the task of sequence labelling. Finally, a multitask setup with language modelling as the second objective — the closest to our approach in Chapter 6 — was used by Rei (2017) to improve the performance of RNN-based Named Entity Recognition. The LM part of their model predicts both surrounding words of an input word using a bidirectional LSTM (the forward LSTM predicts the next word, and the backward one predicts the previous word). In our task, we cannot make use of a backward model, as we work in a word-by-word incremental fashion. We note that there is no previous approach to multitask disfluency detection using a secondary task as general and versatile as language modelling. Furthermore, none of the works mentioned study how well their models generalise across datasets of different dialogue types, nor do they shed much light on what kinds of disfluency structure are harder to detect, and why.

2.6.2 Out-of-Domain Robustness and Data Efficiency

Data-driven approaches to dialogue system development offered by the common bot-building platforms (e.g. Google Dialogflow, Amazon Alexa Skills Kit, Microsoft Bot Framework) make it possible for a wide range of users to easily create dialogue systems with a limited amount of data in their domain of interest (e.g. restaurant search, travel booking, city info).
Most task-oriented dialogue systems are built for a closed set of target domains, and in the setting of a low amount of in-domain training data, this leads to overfitting of machine learning methods and unpredictable performance outside their training sets. For a closed-domain dialogue system, it is extremely important to maintain predictable behaviour, and any failure to detect OOD utterances and respond with an appropriate fallback action (see, for example, the Alexa Skills Kit's built-in fallback action on Amazon Developer Blogs) can lead to a frustrating user experience. In the setting of working with minimal training data, the latter is especially relevant, since there is no access to 'real' OOD examples. There has been a set of prior approaches to OOD detection which require both in-domain (IND) and OOD data (Nakano et al., 2011; Tur et al., 2014). However, it is a formidable task to collect sufficient data to cover an in-theory unbounded variety of OOD utterances. In contrast, Lane et al. (2007) introduced an in-domain verification method that requires only IND utterances. Later, with the rise of deep neural networks, Ryu et al. (2017) proposed an autoencoder-based OOD detection method which surpasses prior approaches without access to OOD data. However, these approaches still have restrictions, such as requiring multiple sub-domains to learn utterance representations, and requiring a decision threshold to be set for OOD detection; this can prevent these methods from being used for most systems that focus on a single task. Moreover, it was recently shown that density estimation models such as autoencoders lack stability in distinguishing between in-distribution and out-of-distribution data (Nalisnick et al., 2019) — and a standalone autoencoder does not suffice for our task, as we are going to demonstrate empirically in Chapter 7.
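A threshold-based detector of the kind discussed above, fitted on IND data only, can be sketched as follows. As a stand-in for a learned autoencoder's reconstruction error we use the distance to the IND centroid, and the 95th-percentile threshold is an assumed choice, not one from the thesis:

```python
def centroid(vecs):
    """Mean of the IND utterance vectors."""
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def recon_error(v, c):
    # stand-in for an autoencoder's reconstruction error (Euclidean distance)
    return sum((a - b) ** 2 for a, b in zip(v, c)) ** 0.5

def fit_threshold(ind_errors, quantile=0.95):
    """Pick the decision threshold from IND errors alone -- no OOD
    examples needed, only an assumed error quantile."""
    s = sorted(ind_errors)
    return s[min(len(s) - 1, int(quantile * len(s)))]

def is_ood(error, threshold):
    return error > threshold
```

An utterance whose error exceeds the IND-fitted threshold triggers the fallback response; everything below it is handled normally.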
In Chapter 7, we will focus on studying the effect of OOD input on goal-oriented dialogue models' performance, and we will propose a simple and efficient solution for improving their robustness using only IND data.

2.7 Dialogue Datasets and Data Collection

The deep learning methods described above perform at their best when provided with large amounts of training data. For the NLP field in general, the main sources of general-purpose data are large-scale web resources: Wikipedia, online news resources, and posts on social networks (e.g. Reddit, Twitter). For dialogue, different kinds of datasets are used depending on the system type. Chat-oriented systems aimed at eliciting human-like open-domain conversation can be trained from large conversational (or conversation-like, e.g. comment threads on message boards) corpora. As such, the following datasets have been used for training Seq2Seq conversation models (a wider review can be found in Serban et al., 2018):

— Cornell Movie Dialogs Corpus (Danescu-Niculescu-Mizil and Lee, 2011) — over 300,000 total utterances,
— OpenSubtitles (Lison and Tiedemann, 2016) — 400 million subtitle lines,
— Reddit conversations (Baumgartner et al., 2020) — over 3.7 billion comments,
— Twitter (Sordoni et al., 2015) — 29 million 'context-message-response' triples (larger datasets can be obtained from the Twitter Stream archives).

The intuition behind using movie subtitle corpora is that movie or TV series dialogues contain everyday conversations and, given enough coverage, it is theoretically possible to obtain open-domain chatting behaviour by mimicking situations from the movies, as well as learning to generalise over them to a certain degree. In goal-oriented dialogue, the datasets used are more domain-specific.
Some of the most widely known are:

— 'Let's Go' (DSTC 1, Williams et al., 2013) — 15,000 dialogues in the bus information domain,
— the Cambridge restaurants dataset (DSTC 2–3, Williams et al., 2016) — 3,000 dialogues in the domain of restaurant search,
— the Stanford Multi-Domain (SMD) dialogue dataset (Eric et al., 2017) — 3,000 dialogues in 3 goal-oriented domains, namely in-car navigation, weather information, and appointment scheduling,
— MultiWOZ (Budzianowski et al., 2018; Eric et al., 2019; Zang et al., 2020) — a multi-domain, multi-task goal-oriented dataset with 10,000 dialogues. The domains represented in MultiWOZ are restaurant, hotel, taxi, police, attraction, train, and hospital,
— Frames (Asri et al., 2017) — 1,369 dialogues in the travel information domain, addressing complex user goals and more advanced real-world scenarios beyond linear form-filling,
— MetaLWOz — the dataset collected for DSTC-8 Track 2, "Fast Domain Adaptation" (Lee et al., 2019a), with more than 37,000 human-human dialogues spanning a total of 227 tasks in 47 domains. The dialogues were collected by assigning human participants the role of bot or user, giving them a problem domain and a related specific task, and instructing them to reach the user's goal over at least 10 dialogue turns.

Apart from these openly available testbeds, the data for domain-specific dialogue scenarios is normally collected via a technique called Wizard-of-Oz (WOz, Dahlbäck et al., 1998), in which two humans interact with each other, one acting as the user and the other simulating the behaviour of the potential dialogue system.

[Figure 2.26: ParlAI web interface for Wizard-of-Oz data collection (Miller et al., 2017)]
Historically, this approach was used to conduct user experience studies, although in the case of machine learning-based dialogue systems it is used to produce the seed data for training the prototype system (also referred to as bootstrapping), for further fine-tuning from real interactions. With training data being the principal asset in modern dialogue system development, it has become of key importance to incorporate data collection into the development pipeline in a principled way. Specifically, crowdsourcing platforms like Amazon Mechanical Turk (AMT, https://www.mturk.com) and Figure Eight (https://www.figure-eight.com) have gained wide adoption for collecting real-user data. Correspondingly, recently introduced dialogue system frameworks and solutions have been designed with AMT integration in mind. For example, Rojas-Barahona et al. (2017) introduce their end-to-end trainable approach along with a WOz framework for data collection on AMT. Moreover, the ParlAI conversational platform (Miller et al., 2017) provides seamless AMT integration for WOz data collection as one of its key features — the web interface for WOz interactions is shown in Figure 2.26. We observe that one of the key directions in conversational systems research is providing the means to collect datasets of moderate size in a principled way for rapid prototyping or bootstrapping of dialogue systems. Still, the less data a system needs to perform reasonably well, the more flexible it becomes for use outside academic testbeds. Therefore, advancing training techniques towards less dependence on data is a high priority in dialogue systems research. In the next chapter, we are going to start our study of dialogue data efficiency by comparing two fundamentally different approaches to dialogue: linguistically informed models based on dialogue grammars, and neural response retrieval models (discussed in Sections 2.5 and 2.2.1 of this chapter, respectively).
Chapter 3
Linguistic Knowledge or Learning from Examples: A Data Efficiency Perspective

We are going to start our research with an experimental study of dialogue models' generalisation power and robustness in the low-resource setup, i.e. in the task of bootstrapping a dialogue system from seed data. We perform the experiments in a controlled environment using the bAbI Dialog Tasks dataset (Bordes et al., 2017) and focus our attention on two fundamentally different types of models: MemN2N, a neural retrieval-based model (Sukhbaatar et al., 2015) discussed in the previous chapter, and a linguistically informed model based on the semantic parser/generator DyLan. We look at their performance in the limited-data setup with bAbI, as well as their generalisation potential to more diverse and challenging input — for that, we introduce bAbI+ (available at https://bit.ly/babi_plus), an augmented version of the bAbI dataset with increased surface complexity in the form of simulated spoken disfluencies.

3.1 Motivation

Every practical machine learning model represents a trade-off between what is learned from data and what is given in the form of inductive biases. The range of possible inductive biases spans from the fundamental ones, e.g. specific network architectures tailored to different types of input data (CNNs for image input, RNNs for sequential input), to task-specific objective functions that those networks minimise. In dialogue, as well as in NLP in general, the main sources of inductive biases are linguistic resources, e.g. gazetteers, ontologies, thesauri, and knowledge bases. Now, in the setting of minimal training data, the extent of what can be learned from it is very severely limited, so those linguistic resources and the corresponding inductive biases become the main way to obtain a generalisable model.
In the dialogue systems area, there are currently several key problems for the practical data-driven development of task-oriented systems, among them: (1) large amounts of dialogue data are needed, i.e. thousands of examples in a domain; (2) this data is usually required to be annotated with task-specific semantic information for the domain (e.g. various dialogue act schemes); and (3) the resulting systems are usually trained from data that does not properly represent many characteristic phenomena of dialogue, such as spoken disfluencies. Towards overcoming issue (2), a recent advance in research on chat-oriented dialogue was the development of end-to-end systems, in which all components are trained from textual dialogue examples, e.g. Sordoni et al. (2015), Vinyals and Le (2015). However, as Bordes et al. (2017) argued, these end-to-end methods may not transfer well to task-based settings (where the user is trying to achieve a domain goal, such as booking a flight or finding a restaurant, resulting in an API call). They then presented an end-to-end method using Memory Networks (MemN2Ns), which achieves 100% performance on a test set of 1000 dialogues after being trained on 1000 training dialogues. This method processes dialogues turn by turn, and so does not have the advantages of more natural incremental systems (Aist et al., 2007; Skantze and Hjalmarsson, 2010); nor does it really perform language generation — rather, it is based on a retrieval model that selects from a set of candidate system responses seen in the data. In this chapter, we will investigate two fundamentally different approaches: (1) MemN2N, the neural retrieval-based model of Sukhbaatar et al. (2015), and (2) a linguistically informed model that uses an incremental semantic parser/generator for dialogue based around the Dynamic Syntax grammar formalism — described in Sections 2.2.1 and 2.5.2, respectively.
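The retrieval step at the core of the MemN2N can be illustrated with a toy, single-hop sketch (hand-picked vectors and our own function names; the real model learns its embeddings end-to-end):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def memn2n_hop(query, memories, outputs, candidates):
    """Single-hop MemN2N read: attend over memory embeddings with the
    query, add the attention-weighted output memories to the query, then
    score candidate responses by dot product and return the argmax."""
    p = softmax([dot(query, m) for m in memories])
    o = [sum(pi * mo[i] for pi, mo in zip(p, outputs))
         for i in range(len(query))]
    u = [q + oi for q, oi in zip(query, o)]
    scores = [dot(u, c) for c in candidates]
    return max(range(len(candidates)), key=scores.__getitem__)
```

Because the final step is a scored selection over candidate responses seen in training, the model can only ever retrieve, never compose, a response.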
Specifically, we explore how well these methods overcome the requirement for large amounts (i.e. thousands of dialogues in a domain) of annotated dialogue data by putting them in a setup with up to 5 example dialogues from bAbI Task 1. Then, in order to evaluate the systems' robustness to unseen but highly likely variations in the user's input, we also introduce an extended, incremental version of the bAbI dataset, which we call bAbI+ (see Section 3.3): it adds some characteristic phenomena of spoken language — such as mid-utterance self-corrections — to the bAbI dialogues. Using this, we further experiment with the two systems and see how well they are able to process this more challenging data.

[Figure 3.1: bAbI Dialog Tasks structure — shown are the synthetic Tasks 1–5 (Bordes et al., 2017)]

3.2 Experimental Framework — bAbI Dialog Tasks

In this chapter, our focus is not on building dialogue systems, but on: (1) studying and quantifying the interactional and structural generalisation power of the DS-TTR grammar formalism, and that of symbolic, grammar-based approaches to language processing more generally — we focus here on specific dialogue phenomena, such as mid-sentence self-corrections, hesitations, and restarts (see below); (2) doing the same for Bordes et al.'s response retrieval model MemN2N, without the use of linguistic knowledge of any form; and (3) comparing (1) and (2). In order to test and quantify the interactional and structural generalisation power of the two models, we need contrasting dialogue datasets that control for interactional vs lexical/syntactic variations in the input dialogues. Furthermore, to make our results comparable to the existing approach of Bordes et al. (2017), we need to use the same dataset that they used. We therefore use Facebook AI Research's bAbI Dialog Tasks dataset (Bordes et al., 2017).
These are goal-oriented dialogues in the domain of restaurant search. The dataset contains 6 tasks of increasing complexity, ranging from only collecting the user's preferences for a restaurant up to conducting full dialogues with changes in the user's goal and providing extra information upon request — see Figure 3.1 for an illustration. The first 5 tasks are 'clean' dialogues composed synthetically, and they thus lack the features of natural everyday conversations. Task 6 (not shown in the figure) is the natural counterpart of Task 5, containing dialogues with human users from the Dialog State Tracking Challenge 2. After Bordes et al.'s original result on Task 1, several studies have shown different ways in which MemN2Ns are outperformed: Eric and Manning (2017) introduced the Copy-Augmented Sequence-to-Sequence model, which outperforms the MemN2N on Task 6; Williams et al. (2017) presented Hybrid Code Networks (discussed in the previous chapter), a combined RNN/rule-based model trainable in a two-stage supervised + reinforcement learning setup, outperforming the MemN2N on Tasks 5 and 6. However, none of these studies control for the type of complexity that might result in worse performance, and thus they do not shed any light on why a particular architecture such as the MemN2N might be at a disadvantage. While Task 5 dialogues have the full task complexity — conducting full dialogues with an unfixed user goal and additional information requests — they are still composed programmatically and contain minimal surface variation. The Task 6 dialogues, on the other hand, are complex both in terms of surface variation and the task itself.
In order to study the specific effects of incremental variations in dialogue, such as conversational disfluencies, we focus on Task 1, where in each dialogue the system asks the user about their preferences for the properties of a restaurant, and each dialogue results in an API call containing the values of each slot obtained (e.g. food-type=french) — the ability to predict the API calls correctly thus provides a direct measure of how well a particular model can interpret the dialogues. We would like to point out that we will be using the synthetic part of the bAbI Dialog Tasks dataset as a controlled experimental environment, and in the next section we present the modifications that we apply to this dataset in a programmatic way — similar to how the initial corpus was created — in order to simulate certain linguistic phenomena of interest.

3.3 The bAbI+ Dataset

The original bAbI dialogues were synthesised such that their main source of complexity is the dialogue goal itself, with its difficulty increasing from Task 1 to Task 5. In addition, they also contain some basic lexical/syntactic variation, e.g. "may i have a table with cuisine in a price range in ?", "can you make a restaurant reservation with cuisine in a price range in ?" However, Task 1 dialogues significantly lack any simulation of the incremental and interactional variations vital for real-life dialogues. In order to obtain such variation while keeping a controllable environment close to the laboratory conditions that bAbI offers, we created the bAbI+ dataset by systematically transforming the original dataset's dialogues. bAbI+ is an extension of the bAbI Task 1 dialogues with disfluent dialogue phenomena (hesitations, restarts, and corrections — see below).
This extension can be seen as orthogonal to the increasing task complexity which Tasks 2–5 offer: we instead increase the complexity of the surface forms of dialogue utterances, while keeping every other aspect of the task fixed. Our modifications model the disfluencies and communication problems of everyday spoken interaction in real-world environments. These variations are:

— Hesitations, e.g. as in "we will be uhm eight";
— Restarts, e.g. "can you make a restaurant uhm yeah can you make a restaurant reservation for four people with french cuisine in a moderate price range";
— Corrections affecting task-specific information — both short-distance ones correcting one token, e.g. "with french oh no spanish food", and long-distance NP/PP-level corrections, e.g. "with french food uhm sorry with spanish food" — all within a single user utterance, rather than across multiple turns.

The phenomena above are mixed in probabilistically (with the aim of reflecting the statistics from Hough, 2014), from fixed sets of templates applied to the original data (see https://github.com/ishalyminov/babi_tools). The modifications affect a total of 11,336 utterances in the 3,998 dialogues. Around 21% of user turns contain corrections, 40% hesitations, and 5% restarts (they are not mutually exclusive, so an utterance can contain up to 3 modifications). Our modifications, with respect to corrections in particular, are more conservative than those observed in real-world data: Hough (2014) reports that self-corrections appear in 20% of all turns of natural conversations from the British National Corpus, and in 40% of turns in the Map Task, a corpus of human-human goal-oriented dialogues. Here is part of an example dialogue in the bAbI+ corpus, showing some of the augmentations (hesitations and corrections) in the user's turns:
sys: hello what can I help you with today?
usr: I'd like to book a uhm yeah I'd like to book a table in a expensive price range no sorry in a cheap price range
sys: I'm on it. Any preference on a type of cuisine?
usr: with indian food no sorry with spanish food
sys: where should it be?
usr: in tokyo
sys: how many people would be in your party?
usr: for uhm yeah for eight people please
sys: ok let me look into some options for you

3.4 Models

3.4.1 MemN2N

We follow Bordes et al.'s setup by using a MemN2N (we took an open-source TensorFlow implementation for the bAbI QA tasks, https://github.com/ishalyminov/memn2n, and modified it according to their setup — see details below). For a description of the model architecture, please see Chapter 2. We are initially interested in the single-hop configuration, with which Bordes et al. achieve perfect accuracy on bAbI Task 1. In order to adapt the data for the MemN2N, we transform the dialogues into triplets. The number of triplets for a single dialogue is equal to the number of the system's turns, and in each triplet, the answer is the current system turn, the question is the user turn preceding it, and the story is a list of all the previous turns from both sides. In addition, each sentence in the story gets 2 extra tokens: the number of the turn, and the ID of the speaker (Bordes et al., 2017). We also use a single embedding matrix A for both the input memories and the user's question; the same matrix is used for the output memory representation — in this we follow Bordes et al. (2017), and it corresponds to the "Adjacent" weight-tying model of Sukhbaatar et al. (2015). In our setup, there are no out-of-vocabulary words for the model during either training or testing, for both bAbI and bAbI+, with the maximum sentence length taking account of the increase due to the transformations in bAbI+.
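The triplet transformation just described can be sketched as follows (the function name and turn representation are our own; strict user/system alternation with the user speaking first is assumed):

```python
def to_triplets(dialogue):
    """dialogue: list of (speaker, utterance) turns, user first,
    alternating with the system. Produce one (story, question, answer)
    per system turn; each story sentence is prefixed with the turn
    number and the speaker ID, as in Bordes et al.'s preprocessing."""
    triplets, story = [], []
    for i in range(0, len(dialogue) - 1, 2):
        (u_spk, question), (s_spk, answer) = dialogue[i], dialogue[i + 1]
        triplets.append((list(story), question, answer))
        story.append(f"{i + 1} {u_spk} {question}")
        story.append(f"{i + 2} {s_spk} {answer}")
    return triplets
```

The first triplet has an empty story; every later one carries the full annotated history, which is what the memory component attends over.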
We train our MemN2Ns with an SGD optimiser for 100 epochs, with a learning rate of 0.01 and a batch size of 8 — in this we again follow the configuration reported by Bordes et al. (2017) as the best for bAbI Task 1.

3.4.2 DyLan: bAbI and bAbI+ Setup Details

Although DyLan's Dynamic Syntax grammar is learnable from data, the existing learned models in prior work (Eshghi et al., 2013a; Eshghi et al., 2013b) were induced from a corpus of child-directed utterances, and there were some constructions, as well as individual words, that the resulting lexicons did not include. We therefore extended this induced grammar manually to cover the bAbI dataset, which, despite not being very diverse, contains a wide range of complex grammatical constructions, such as long sequences of prepositional phrases, adjuncts, short answers to yes/no and wh-questions, appositions of NPs, causative verbs, etc. — and all of this within and across dialogue turns/speakers. Using DyLan, we parsed all dialogues in the bAbI train and test sets, as well as in the bAbI+ corpus, word by word, including both user and system utterances, in context. The grammar parses 100% of the dialogues, i.e. it does not fail on any word in any of the dialogues. Our aim here is to assess the ability of a DyLan-based system to generalise from small data and to compare this with the results of the MemN2N in Bordes et al. (2017). The latter is based on the retrieval of system responses given the dialogue history up to that point. Therefore, for direct comparison, and for simplicity of exposition, we set up an experimental testbed extending the semantic parser in the following way: we employ the logic originally presented as the BABBLE user simulation (Section 2.5.3), this time for the system side, resulting in a 'system simulation'.
We then use this to predict a system response by parsing and encoding the containing test dialogue up to the point immediately prior to the system turn. This results in a triggering state s_trig, which is then used as the key to look up the system's response from the rules constructed as per Section 2.5.3. The returned response is then parsed word by word as normal, and this same process continues for the rest of the dialogue. This method uses the full machinery of DS-TTR and our state-encoding method, and will thus reflect the generalisation properties that we are interested in. Our overall method, described in Section 2.5.3, respects the turn ordering encountered in the data, or, more generally, the order in which semantic increments are added to context. This is because states are composed not only of the semantic features of the current turn, but also of those of the conversation history; thus they capture the contextual boundary at which a user turn is being generated or a system turn monitored (e.g. in the bAbI 'restaurant search' domain, a state might capture the fact that the user has already provided the cuisine type and the location of the restaurant). Unlike many other approaches to goal-oriented dialogue, the DyLan-based approach is not based on dialogue acts, and is word-by-word incremental. This means that there is no given/prior definition of what sequences of words or semantic updates constitute a dialogue turn: the system needs to learn this (or, in general, automatically construct it from the data), or else it would just go on generating (i.e. exploring grammatical word outputs) without ever stopping. We prevent this by having the simulator interrupt the system at the semantic turn and clause boundaries that are encountered in the data; thus the behaviour of the simulator determines the system's turn boundaries.

[Figure 3.2: Few-shot performance of DyLan and MemN2N]
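The state-keyed response lookup can be sketched as below; the encoding of a state as a set of (slot, value) facts is a simplification of the TTR record types actually used, and the example rules are hypothetical:

```python
def encode_state(facts):
    """Order-independent encoding of the semantic context accumulated
    so far, as a frozenset of (slot, value) facts -- a stand-in for the
    TTR record-type states used by DyLan."""
    return frozenset(facts)

# Hypothetical rules, as would be constructed from the seed dialogues:
# each triggering state maps to the system response observed after it.
response_rules = {
    encode_state([("greeted", True)]): "any preference on a type of cuisine?",
    encode_state([("greeted", True), ("cuisine", "spanish")]): "where should it be?",
}

def predict_response(facts):
    """Look up the response for the triggering state; None if unseen."""
    return response_rules.get(encode_state(facts), None)
```

Because the key is a set of semantic facts rather than a word sequence, two dialogues that reach the same semantic state — even via different surface forms or disfluent detours — trigger the same response.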
3.5 Experiments

3.5.1 Experiment 1: Generalisation from Small Data

We have now set out all we need to perform the first experiment. Since we are here interested in both (1) data efficiency and (2) robustness, we use all the bAbI and bAbI+ data — the train, dev, and test sets — in a cross-validation setup as follows: we train the MemN2N, as well as construct DyLan's semantic context-response mapping, from 1–5 examples selected at random from the longest dialogues in bAbI (note that bAbI+ data is never used for training in this experiment). This process is repeated across 10 folds. The models are then tested on sets of 1000 examples selected at random in each fold. Both the training and test sets constructed in this way are kept constant in each fold across DyLan and the MemN2N. The test sets are selected either exclusively from bAbI or exclusively from bAbI+.

3.5.1.1 Results: Predicting System Turns

Figure 3.2 shows per-utterance accuracies for the DyLan and MemN2N models. Per-utterance accuracy is the percentage of all system turns in the test dialogues that were correctly predicted. The figure shows that DyLan can generalise to 74% of bAbI and 65% of bAbI+ with only 5 input dialogues from bAbI. It also shows that MemN2Ns can generalise remarkably well — although, as discussed below, this result is misleading on its own, as the MemN2Ns are very poor at generating the final API calls correctly on both the bAbI and bAbI+ data, and are thus making too many semantic mistakes.

3.5.2 Experiment 2: Semantic Accuracy

The results from Experiment 1 on their own can be misleading, as correct prediction of system responses does not in general tell us enough about how well the models are interpreting the dialogues, or whether they are doing so with a sufficient level of granularity.
To assess this, in this second experiment we measure the semantic accuracy of each model by looking exclusively at how accurately they predict the final API calls in the bAbI and bAbI+ datasets. For the MemN2N model, we follow the same overall procedure as in the previous experiment: train on bAbI data, and test on bAbI+.

3.5.2.1 Results: Prediction of API Calls

DyLan results. Successful parsing of all the dialogues in the bAbI and bAbI+ datasets, as shown above, does not mean that the semantic representations compiled for the dialogues were in fact correct. To measure the semantic accuracy of the DS-TTR parser DyLan, we programmatically checked that the correct slot values — those in the API call annotations — were in fact present in the semantic representations produced by the parser for each dialogue (see Fig. 2.23 for example semantic representations). We further checked that no other, incorrect slot value was present in these representations. The results showed that the parser has 100% semantic accuracy on both bAbI and bAbI+. This result is not surprising, given that DS-TTR is a general model of incremental language processing, covering phenomena such as self-corrections and restarts (see Hough, 2014 for details of the model).

MemN2N results — small-data setup. Given just 1 to 5 training instances from bAbI, as in the previous experiment, the mean API call prediction accuracy of the MemN2N model is nearly 0 on both bAbI and bAbI+. This is not at all unexpected, since we see prediction of the API calls as an inherently generative process, unlike the prediction of system turns, which can be done on a retrieval/look-up basis alone. For this, the model needs to observe the different word sequences
that might determine each parameter (slot) value, and observe them with sufficient frequency and variation. This is unlike a semantic parser like DS-TTR, which produces semantic representations for the dialogues as a result of the structural, linguistic knowledge that it embodies.

MemN2N results — full data setup. Nevertheless, we were also interested in the general semantic robustness of the MemN2N model to the transformations in bAbI+, i.e. how well does the MemN2N model interpret bAbI+ dialogues when trained on the full bAbI dataset? Does it then learn to generalise to (process) the bAbI+ dialogues with sufficient semantic accuracy? Our hypotheses are that (i) given the positional encoding of memory vectors in the MemN2N model and the underlying attention mechanism, it would be able to learn to process bAbI+ dialogues given that it was trained on similar data, resulting in an insignificant drop in performance from bAbI to bAbI+ data; (ii) a lot more data would be needed to learn to process the bAbI+ structures; and (iii) if trained on bAbI data, there would be a very significant drop in performance on bAbI+, with incorrect API calls predicted as a result of incorrect weightings and a total lack of opportunity to learn the meaning of words such as "no" or "sorry" which trigger the self-corrections and restarts.

Table 3.1: API call accuracy (%) of the MemN2N trained on the full dataset

  Train / test set configuration   Train accuracy   Test accuracy
  bAbI / bAbI                      100              100
  bAbI / bAbI+                     100              28
  bAbI+ / bAbI                     67               99
  bAbI+ / bAbI+                    72               53

Table 3.1 shows that we can fully replicate the results reported in Bordes et al. (2017): the MemN2N model can predict the API calls with 100% accuracy when trained on the bAbI trainset and tested on the bAbI testset.
But when this same model is tested on bAbI+, the accuracy drops drastically to 28%, making any dialogue system built using this model unusable in the face of more diverse dialogue data — thus confirming our hypothesis (iii). This is further discussed below.

3.5.2.2 How Much Data Is Enough Data?

Given the results obtained so far, we are next interested in: (1) how robust MemN2Ns are to the surface transformations in bAbI+ when trained on bAbI; (2) whether MemN2Ns can learn to interpret bAbI+ when they are in fact trained on similar data that actually contains the bAbI+ structures — i.e. when trained on bAbI+; and (3) if so, how much bAbI+ data is needed for this. While (1) is a question about the generalisation properties of a model, (2) & (3) are about the in-principle and/or practical limitations of MemN2Ns in learning to interpret dialogues containing, e.g., self-corrections where utterances contain both the correct and an incorrect (and subsequently repaired) slot value (e.g. "for four sorry five people"). To answer (1) we therefore train the model on the bAbI dataset and test on bAbI+; and to answer (2) & (3), we train the model on the bAbI+ train set and test it on the bAbI+ test set. Furthermore, in order to explore the impact of the amount of training data on the model's performance, we perform the latter experiment with varying train set sizes, as well as varying the hyperparameters: embedding size and the number of memory hops. The extended training data is obtained in the same way as the initial bAbI+ dataset: we go over the same original bAbI dialogues and keep randomly mixing in the incremental modifications.

Table 3.2: MemN2N API call accuracy (%) with extended training data

  Training bAbI+ dialogues   Memory hops   Embedding size   Train acc.   Test acc.
  2,000                      2             128              72.5         57.5
  5,000                      2             128              72.7         60.7
  10,000                     2             128              72.8         65.8
  50,000                     1             128              82.6         78.2
  100,000                    1             64               83.3         80.5
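The hyperparameter sweep just described — trying each combination of memory hops and embedding size for a given train set size and keeping the best — can be sketched as follows. This is a minimal sketch: `train_and_eval` is a hypothetical stand-in for training a MemN2N with the given hyperparameters and returning its test accuracy.

```python
from itertools import product

def best_configuration(train_and_eval, hops_options=(1, 2, 3),
                       emb_options=(32, 64, 128)):
    """Grid search over memory hops and embedding size for one train set
    size; returns the best (hops, emb_size, test_accuracy) triple."""
    best = None
    for hops, emb in product(hops_options, emb_options):
        acc = train_and_eval(hops, emb)  # train a model, return test accuracy
        if best is None or acc > best[2]:
            best = (hops, emb, acc)
    return best
```

Running this once per train set size, and reporting only the winning configuration, reproduces the shape of the results table above.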
Table 3.2 shows how the MemN2N performs on the same initial, fixed bAbI+ test set when trained on progressively more data, up to 100,000 bAbI+ dialogues. As MemN2N's performance on bigger data depends heavily on the model's hyperparameters, in this experiment we perform a grid search over the number of memory hops (1, 2, 3) and the embedding dimensionality (32, 64, 128) for each train set size independently — everything else is fixed as in the previous experiment. Only the best performing hyperparameter configuration for each of the train set sizes is included in the table. The results confirm hypothesis (ii) above, i.e. that MemN2Ns are in principle able to learn to process the incremental dialogue phenomena in bAbI+, but that they require tens of thousands of training instances for this: even with 100,000 dialogues, the semantic accuracy on the original test set stands at 80.5%.

3.6 Discussion

3.6.1 MemN2N Analysis

The MemN2N model was able to predict system responses remarkably well, even when trained on very few training instances. But the results from Experiment 2 above showed that this was misleading: the MemN2Ns were making a large number of semantic mistakes when interpreting the dialogues, in both the bAbI and bAbI+ datasets. Even when trained on the full bAbI dataset, the model performed badly on bAbI+ in terms of semantic accuracy. We diagnose these results as follows:

Problem complexity. The first thing to notice is that in bAbI dialogue Task 1, the responses are highly predictable and stay constant regardless of the actual task details (slot values) up to the point of the final API calls; and further, that the prediction of API calls is a generation process, unlike the prediction of the system turns, which is retrieval-based. This, in our view, explains the very large difference in MemN2N performance across the two prediction tasks.
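The two quantities contrasted throughout this analysis — per-utterance response accuracy (Experiment 1) and API-call semantic accuracy (Experiment 2) — can be sketched as follows. This is an illustrative sketch, not the thesis code; function and variable names are assumptions.

```python
def per_utterance_accuracy(predicted_turns, gold_turns):
    """Experiment 1's metric: % of system turns predicted exactly right."""
    correct = sum(p == g for p, g in zip(predicted_turns, gold_turns))
    return 100.0 * correct / len(gold_turns)

def api_call_accuracy(predicted_calls, gold_calls):
    """Experiment 2's metric: % of dialogues whose final API call contains
    exactly the gold slot values — all of them, and no incorrect ones.
    Calls are represented here as sets of (slot, value) pairs."""
    correct = sum(set(p) == set(g) for p, g in zip(predicted_calls, gold_calls))
    return 100.0 * correct / len(gold_calls)
```

The gap between the two metrics is exactly what makes the response-accuracy numbers misleading on their own: a model can match most templated turns while still emitting wrong slot values in the one turn that matters.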
Model robustness to the bAbI+ transformations. The variations introduced in bAbI+ are repetitions of both content and non-content words, as well as additional incorrect slot values. The model was working in the same setup as DyLan, so none of these variations could be treated as unknown tokens for either system. In the case of MemN2N, however, some of the mixed-in words never appeared in the training data, and bAbI+ utterances were augmented significantly with those words — so it was interesting to see how such untrained embeddings would affect the latent memory representations inside the MemN2N. The resulting performance suggests that these variations had no significant impact on the MemN2N as far as predicting system responses was concerned. But the incorrect slot values introduced in self-corrections affect the system's task completion performance significantly, only surfacing at the point of API call predictions. It is worth noting that when trained on extended amounts of bAbI+ data, with sufficient representation of the target speech phenomena, the MemN2N compensates for most of the initial performance drop. However, it only attains a reasonable level of accuracy (> 80%) when trained on 100,000 bAbI+ dialogues, which renders it potentially impractical for real-world tasks with more complex data distributions.

3.6.2 DyLan Analysis

The linguistically informed DyLan-based model we used in this chapter has the following conceptual advantages over previous approaches to dialogue system development:

— word-by-word incremental (and thus more natural) language understanding, dialogue management, and generation [4];

— a complete dialogue system for a new task can be automatically induced, using only 'raw' data — i.e. successful dialogue transcripts;

— wide-coverage, task-based dialogue systems can be built from much smaller amounts of data, as shown in Section 3.5.
[4] Applicable to the full-fledged BABBLE setup as described in Section 2.5.2.

This final point bears further examination. Since it is an empirically adequate model of incremental language processing in dialogue, the DS-TTR grammar is required to capture interactional variants such as question-answer pairs, over- and under-answering, self- and other-corrections, clarification, split utterances, and ellipsis more generally. As we showed in Section 3.5, even if most of these structures are not present in the seed example(s), the final system is able to handle them, thus resulting in a very significant generalisation beyond the original data. One could object that the DyLan setup is a carefully tuned rule-based system, thus perhaps rendering these results trivial. But we note that the results here are not due to ad-hoc constructions of rules/lexicons, but due to the generality of the grammar model and its attendant incremental, left-to-right properties; and that the same parser can be used in other domains. Furthermore, the ability to process self-corrections, restarts, etc. "comes for free", without the need to add or posit new machinery. The generalisation results we report above for DyLan follow entirely from the knowledge present within the grammar as a computational model of dialogue processing and contextual update, rather than from anything learned from data. Applying the full RL method of BABBLE (Section 2.5.2) would have meant that the system would actually discover many interactional and syntactic variations that are present neither in bAbI nor in bAbI+.

3.7 Conclusions

In this chapter, we have evaluated the generalisation properties of a purely linguistically informed model based on the dialogue semantic parser DyLan, and compared it to a neural response retrieval model, MemN2N.
We did so by putting them in the controllable environment of the bAbI Dialog Tasks and performing a series of experiments assessing their potential to generalise to more interactional variations in the input data (i.e., generalisation from seed examples to the full dataset), as well as the models' robustness to simulated spoken language phenomena — self-corrections, hesitations, and restarts — represented in our bAbI+ corpus. Our experiments show that the MemN2N lacks the ability to generalise to such phenomena, and performs poorly when confronted with such variations even within synthetic, programmatically generated dialogue data. Our experiments further show that although this particular model is in principle able to learn to process disfluent dialogue phenomena, it requires an impractically large amount of data to do so. The results in this chapter therefore shed significant light on the currently ambiguous accuracy results reported for end-to-end systems (Bordes et al., 2017). On the other hand, experiments with DyLan show that it can process 74% of bAbI Dialog Task 1 even when only exposed to 0.13% of the data (5 dialogues); in addition, it can process 65% of bAbI+. That is in contrast to the MemN2N, which was not robust to the structures we introduced in bAbI+, even when trained on the full bAbI dataset. The above results give us the following insights on the two possible ways towards attaining a practical level of data efficiency in dialogue. Firstly, an inductive bias in the form of a linguistically informed model can be an efficient approach in the extreme case of 1-shot/few-shot generalisation. However, the bottleneck here is the linguistic resource itself — in our case, the incremental dialogue grammar of DyLan.
Parsing natural speech is generally challenging, especially in the case of free-form conversation, so producing wide-coverage grammars for this purpose can be a notoriously hard task. The second path, featuring neural models, has a definite advantage here from the Natural Language perspective: given a sufficient amount of data to represent the phenomena of interest, models like MemN2N are able to adjust and process it adequately from the target task's perspective. Scaling to an unknown domain, or to a foreign language for that matter, will require nothing more than a sufficiently representative dataset. The challenge, though, is how much data is considered sufficient for that, and our controlled experiments with the MemN2N show that its actual data requirements are quite far from practical. Therefore, the vital step towards real-world applicability of such systems is to reduce the amount of required training data and annotations closer to what can be considered practically data-efficient, without the loss of the models' generalisation potential. In the next chapter, we are going to explore the purely data-driven approach to dialogue system bootstrapping. That is, we will focus on learning transferable representations of dialogue capturing lexical and interactional similarity (as discussed in Section 2.4) and their applicability for reducing the data consumption of goal-oriented dialogue systems.

Chapter 4
Learning Transferable Dialogue Representations

As we saw in the previous chapter, both linguistically informed and purely data-driven methods have their potential in terms of data efficiency. The principal advantage of machine learning methods is that they generally do not require constructing and maintaining syntactic/semantic grammars, which are very challenging to build, especially in the setting of spontaneous spoken language.
The problem with machine learning methods, though, as we also saw previously, is that their performance depends to a high degree on the training data, and the overall data consumption of those models significantly reduces their flexibility in practical setups. A possible solution to that is to use large sources of domain-independent data in order to obtain a common language and dialogue representation, and then transfer it to the specific problem domain, with minimal fine-tuning to the available in-domain data. As outlined in Chapter 2, transfer learning has already proved to be a very efficient technique in computer vision and is being actively adopted in Natural Language Processing — in this chapter, we are going to explore its applicability to dialogue. Specifically, we are going to present the Dialogue Knowledge Transfer Network (or DiKTNet), a goal-oriented dialogue response generation model designed for few-shot learning, i.e. training only using a small number of complete in-domain dialogues. The key underlying concept of this model is transfer learning: DiKTNet makes use of the latent text representation learned from several sources, ranging from large-scale general-purpose textual corpora to similar dialogues in domains different to the target one. We use the evaluation framework of Zhao and Eskénazi (2018) and mainly compare our approach to theirs — and, similarly to them, we use the Stanford Multi-Domain dialogues dataset (Eric et al., 2017). While Zhao and Eskénazi's method does not require complete in-domain dialogues and uses annotated utterances instead (and is therefore presented as "zero-shot"), we show that our model achieves superior performance with roughly the same amount of data (with respect to in-domain utterances) while requiring no annotations whatsoever.

4.1 Few-Shot Dialogue Generation

We first describe the task we are addressing in this chapter, and the corresponding base model. Specifically, here we work with dialogue data that is organised into multiple non-overlapping domains. We say that source domains can be used to train the model, whereas the target domain is mainly used for evaluation. Different data-efficient setups assume the corresponding usage of target-domain data. As such, in zero-shot learning, the full target dialogues are not used at all; few-shot learning assumes fine-tuning to several target dialogues and then evaluating on the rest — we will be working in the latter setting. Our domains here are in-car navigation, weather information, and appointment scheduling, as represented in the SMD dataset we are going to work with (see Section 2.7 for a short description). In a single experiment, we work with 2 of those domains as source, and the third one becomes the target domain. An example dialogue from SMD is shown in Table 4.1 — every dialogue in the dataset comes with a snippet from the underlying Knowledge Base, working as a simulation of the domain-specific search API.

Table 4.1: Example dialogue from SMD (a) with the corresponding knowledge base snippet (b) — driver is the user, car is the system

(a)
  Driver: I need to find out the time and parties attending my optometrist appointment
  Car:    I have 3 appointments scheduled, with Alex, your sister, and Jeff. Which are you referring to?
  Driver: I want to know about the one that Alex is joining me at
  Car:    That optometrist appointment is at 4 pm.
  Driver: Thanks
  Car:    no problem

(b)
  Event      Time   Date      Party   Agenda
  opt. appt  10 am  Thursday  sister  —
  dinner     8 pm   the 13th  Ana     —
  opt. appt  7 pm   the 20th  Jeff    —
  opt. appt  4 pm   the 13th  Alex    —
  ...        ...    ...       ...     ...
The dialogue system's task then is to (1) train on the available data from the source domains, while learning to extract relevant data from the KB snippets along the way, (2) fine-tune to the given seed dialogues + KB snippets in the target domain, and (3) predict responses for the rest of the target-domain dialogues.

4.2 The Base Model

In our approach, we build on top of HRED — in particular, the copy-augmented model (Merity et al., 2017). HRED was discussed earlier in Section 2.3. Here, we provide its negative log-likelihood (NLL) optimisation objective, which we are going to build upon:

  L_HRED = log p_{F_d}(y_sys | F_e(c, x_usr))    (4.1)

where x_usr is the user's query, y_sys is the system's response, c is the dialogue context, and F_e and F_d are respectively the hierarchical encoder and the decoder. We work with goal-oriented dialogues where part of the system's task is to provide the integration with an underlying database or API via the KB snippets, as described above. Given that such information may for the most part contain unseen token sequences, especially in the target domain, we use a copy mechanism in order to be able to directly transfer those tokens from the input into the system's responses. More specifically, we represent the KB info as token sequences and concatenate them to the dialogue context, similarly to the CopyNet setup of Eric et al. (2017); the only difference is the actual copy mechanism implementation, for which we employ the Pointer-Sentinel Mixture model (Merity et al., 2017; Zhao and Eskénazi, 2018) — see the description of these models in Section 2.3.5.
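The soft generate-vs-copy decision behind the Pointer-Sentinel Mixture can be sketched as follows. This is a minimal sketch with distributions given as `{token: probability}` dicts; in the actual model, the gate and both distributions are produced by the network rather than passed in.

```python
def pointer_sentinel_mix(p_vocab, p_copy, gate):
    """Mix the generation and copy distributions:
    p(w) = gate * p_vocab(w) + (1 - gate) * p_copy(w),
    where p_copy puts mass on tokens from the dialogue history / KB snippet."""
    tokens = set(p_vocab) | set(p_copy)
    return {
        t: gate * p_vocab.get(t, 0.0) + (1 - gate) * p_copy.get(t, 0.0)
        for t in tokens
    }
```

If both inputs are proper distributions, the mixture sums to one for any gate in [0, 1], so tokens unseen in the vocabulary can still be emitted by copying.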
Our model is thus supposed to produce goal-oriented responses for target dialogue contexts using a soft decision: whether to generate the next word from the vocabulary, or to copy a token from earlier in the dialogue history (or from some position in the knowledge base snippet).

4.3 Dialogue Knowledge Transfer Networks

Transfer learning is considered the key means for efficient training with minimal data, and our DiKTNet model essentially introduces several knowledge-transfer augmentations to the base HRED model described above. DiKTNet training is performed in the two stages described below.

4.3.1 Stage 1. Dialogue Representation Pre-training

Dialogue structure — e.g. word sequences — is highly specific to a given domain or task, and the meaning of conversational utterances is highly contextual, i.e. similar utterances may have different meanings depending on the context. Nevertheless, there is a lot of similarity in dialogue structure — i.e. sequences of dialogue actions — across domains: e.g. a conversation normally starts with a mutual greeting, and a question is very often followed by an answer. Here, we propose to exploit this phenomenon by learning a latent dialogue action representation in order to better capture the dialogue structure, abstracting away from surface forms. Crucially, we learn such a representation from MetaLWOz (derived from Meta-Learning Wizard-of-Oz; Lee et al., 2019a), a dataset specifically created for the purposes of meta-learning and transfer learning, consisting of human-human conversations in 51 unique domains — see the description of the dataset in Section 2.7 and some statistics below in Section 4.5.

Figure 4.1: DI-VAE and DI-VST (DiKTNet Stage 1)
For this stage of training we use unsupervised, variational autoencoder-based (VAE) representation learning, focusing in particular on the discrete information (DI) sentence representation framework of Zhao et al. (2018). As will be discussed later in Section 4.7.2, the underlying variational autoencoding techniques facilitate more stable training and result in more efficient representation models compared to the more widely-used conventional VAE. As introduced in Section 2.3.2, the DI-VAE model differs from a classic, continuous VAE in that it (1) implicitly promotes mutual information between the model's input and its latent code, and (2) uses the Batch Prior Regularisation technique for the calculation of the KL divergence in the case of discrete latent codes. DI-VAE and its skip-thought counterpart DI-VST are visualised in Figure 4.1. In the downstream DiKTNet model, we use the DI-VAE autoencoder to obtain the representation of the user's query: z_usr = DI-VAE(x_usr). DI-VST, in turn, is used to obtain a prediction of the system's action z_sys in the discretised latent form, given the user's input x_usr as well as the full dialogue context c. For that, the DI-VST autoencoder is used as part of a hierarchical, context-aware encoder-decoder response generation model — following Zhao et al. (2018), we refer to it as the Latent Action Encoder-Decoder, or LAED (discussed in Section 2.3.3). Its optimisation objective is as follows:

  L_LAED = E_{q_R(z_sys | y_sys) p(y_sys, c, x_usr)} [ log p_π(z_sys | c, x_usr) + log p_F(y_sys | z_sys, c, x_usr) ]    (4.2)

where F is the decoder generating the system's response y_sys, and π is the 'policy' feed-forward network predicting z_sys from the dialogue context c and the user's last turn x_usr.
As noted in our introduction of LAED (Section 2.3.3), the recognition model q_R(z_sys | y_sys), conditioned on the gold system response, is only used during training and is discarded at prediction time. Our intuition behind using different models for representing the user's query and the system's latent action follows the empirical results of Zhao et al. (2018), who showed that DI-VAE is better at capturing the specific words of an utterance, while DI-VST represents the overall dialogue action better. We train these two models on MetaLWOz in an unsupervised way with the objectives described above, and use their discretised latent codes z_usr and z_sys respectively in the downstream model at the next stage of training.

4.3.2 Stage 2. Transfer

At this stage, we train directly for our target task, few-shot dialogue generation, and thus return to the model described in Section 4.1. While the training procedure of this model naturally assumes domain transfer, we provide it with additional sources of textual and dialogue knowledge of varying generality, described below. As opposed to direct domain transfer, we incorporate domain-general dialogue understanding from the LAED representation trained on MetaLWOz at the previous stage. LAED captures the background top-down dialogue structure: sequences of dialogue acts in a cooperative conversation, the latent dialogue act-induced clustering of utterances, and the overall phrase structure of spoken utterances. We incorporate this information into the model by conditioning HRED's decoder on the combined latent codes from Stage 1, and refer to this model as HRED +Stage1. Its optimisation objective is as follows:

  L_HRED+Stage1 = E_{p(x_usr, c) p(z_usr | x_usr) p_π(z_sys | x_usr, c)} [ log p_{F_d}( y_sys | {F_e(x_usr, c), z_usr, z_sys} ) ]    (4.3)

where z_usr and z_sys are respectively samples obtained from the DI-VAE user utterance model and the LAED system action model, and {·} is the concatenation operator.
Figure 4.2: DiKTNet Stage 2 (tokens in brackets are KB data)

The last, most general source of knowledge we use is a pre-trained ELMo model (Peters et al., 2018). Apart from using an underlying bidirectional RNN encoder, ELMo captures both token-level and character-level information, which is especially crucial for understanding unseen tokens and KB items in the underrepresented target domain. The HRED model with ELMo as the utterance-level encoder is referred to as HRED +ELMo. Finally, DiKTNet is the HRED augmented with both the ELMo encoder and the sentence representations (DI-VAE and LAED) from Stage 1. DiKTNet is visualised in Figure 4.2. The model (as well as its variants listed above) is implemented in PyTorch (Paszke et al., 2017), and the code is openly available [1].

4.4 Baselines

We perform an exhaustive ablation study of DiKTNet by comparing it to all of its variations mentioned above: HRED, HRED +ELMo, and HRED +Stage1. In addition, we have HRED +VAE — a HRED +Stage1 counterpart for which we use a regular, continuous VAE in place of DI-VAE and DI-VST, in order to determine the impact of the discretised latent codes (see Eq. 2.14 for the corresponding objective function). Furthermore, we compare DiKTNet to the previous state-of-the-art approach, Zero-Shot Dialogue Generation (Zhao and Eskénazi, 2018). This model did not use any complete in-domain dialogues; instead, it relied on annotated utterances in all of the domains. We use it as-is (ZSDG), as well as its variation described below.

[1] http://tiny.cc/diktnet

We make use of ZSDG's central idea of using NLU-annotated in-domain utterances as 'domain descriptions' that facilitate bridging dialogue understanding across domains, but instead of using manually annotated utterances, we employ automatic NLU markup.
Our annotations include:

— Named Entity Recognition — Stanford NER model ensemble of case-sensitive and caseless models (Finkel et al., 2005);

— Date/time markup — Stanford SUTime (Chang and Manning, 2012);

— Wikidata entity linking — Yahoo FEL (Blanco et al., 2015; Pappu et al., 2017).

We serialise the annotations from these sources into token sequences and make domain-description tuples out of all the utterances in the source and target domains. In this way, most of our domain descriptions share the structure and content of the original ones. For example, for the phrase 'Will it be cloudy in Los Angeles on Thursday?':

— the original ZSDG annotation is "request #goal cloudy #location Los Angeles #date Thursday",

— our NLU annotation is "LOCATION Los Angeles DATE Thursday".

We have two models in this setup, with (NLU_ZSDG+Stage1) and without (NLU_ZSDG) the use of Stage-1 representations, respectively.

4.5 Datasets

For the latent representation learning, we use MetaLWOz, described in Section 2.7. All the domains available in the MetaLWOz dataset are listed in Table A.2 of Appendix A, and some example dialogues can be found there in Section A. Our target dataset is the Stanford Multi-Domain (SMD) dialogues corpus (Eric et al., 2017), described in Section 2.7. SMD contains human-human goal-oriented dialogues in three domains, with a simulation of the underlying search API: each dialogue in SMD comes with a knowledge base snippet representing the result of implicitly querying the API, with the KB schema (i.e. column names and data types) being specific to each domain. Although sharing some common features (the setting of an intelligent in-car assistant and the use of the underlying KB), the dialogues differ significantly across domains, which makes the domain transfer sufficiently challenging. The statistics of the two datasets are shown in Tables 4.2 and 4.3, respectively.
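The NLU-markup serialisation described in Section 4.4 can be sketched as follows. This is a minimal sketch; the function name and the (label, surface form) input format are illustrative assumptions, not the thesis code.

```python
def serialise_annotations(annotations):
    """Flatten NLU annotations, given as (label, surface form) pairs,
    into a domain-description token sequence such as
    'LOCATION Los Angeles DATE Thursday'."""
    return " ".join(f"{label} {value}" for label, value in annotations)
```

Each annotated utterance yields one such token sequence, which then stands in for the manually produced ZSDG domain description.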
Table 4.2: MetaLWOz dataset statistics

  Domains                51
  Dialogues              40,388
  Avg. dialogue length   11.91

Table 4.3: Stanford multi-domain dataset statistics (trainset)

                         Navigation   Weather   Schedule
  Dialogues              800          797       828
  Utterances             5,248        4,314     3,170
  Avg. dialogue length   6.56         5.41      3.83

In our experiments, we make sure that all the source data have no domain overlap with the target dialogues we're evaluating on. We therefore make our training setup dynamic by excluding specific MetaLWOz domains based on the target SMD one, such that:

— for the Navigation target domain in SMD, we exclude MetaLWOz's Store Details domain;

— for Weather, we exclude Weather Check;

— for Schedule, we exclude Update Calendar and Appointment Reminder.

4.6 Experimental Setup and Evaluation

Our few-shot setup is as follows. Given the target domain, we first train the Stage-1 model(s) on the MetaLWOz data, having filtered the source domains as described above. We used a DI-VAE and a DI-VST-based LAED, both of size 10 × 5. Next, having trained the Stage-1 models, we train DiKTNet on all the source domains from the SMD dataset, without further fine-tuning of DI-VAE/LAED. We also fine-tune to a portion of target-domain data (thus working in a few-shot setup) by sampling the target dialogues together with their KB info, varying the amount from 1% to 10% of all the available target data. For the NLU_ZSDG setup, we annotated all available SMD data and randomly selected a subset of 1000 utterances from each source domain, and 200 utterances from the target domain. For the source domains, this number amounts to roughly a quarter of all available training data — we chose it in order to make use of as much annotated data as possible while keeping the domain description task secondary. We made sure to stay under roughly the same target-domain data requirements as the ZSDG baseline.
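The dynamic source-domain exclusion above can be written as a simple filter. This is a sketch: the mapping encodes exactly the exclusions listed, while the function and constant names are hypothetical.

```python
# SMD target domain -> MetaLWOz domains excluded from the source data
EXCLUDED_DOMAINS = {
    "navigation": {"Store Details"},
    "weather": {"Weather Check"},
    "schedule": {"Update Calendar", "Appointment Reminder"},
}

def filter_source_domains(metalwoz_domains, smd_target):
    """Drop the MetaLWOz domains overlapping with the given SMD target domain."""
    banned = EXCLUDED_DOMAINS[smd_target]
    return [d for d in metalwoz_domains if d not in banned]
```

Applying this filter before Stage-1 training guarantees that no MetaLWOz domain leaks information about the held-out SMD target domain.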
For evaluation, we follow the approach of Zhao and Eskénazi (2018) and report BLEU and Entity F1 scores. Given the non-deterministic nature of our training setup, we report means and variances of our results over 10 runs with different random seeds. We also perform an additional evaluation of DiKTNet's performance with extended amounts of target data, and compare it to the original results for the SMD dataset reported by Eric et al. (2017) upon the dataset's introduction. Their model — the Key-Value Retrieval Network (KVRet), a variant of copy-augmented Seq2Seq with a separate copying mechanism for the KB snippets — was trained with all the available data for a given domain. In this evaluation, we average BLEU scores across all 3 SMD domains in order to be consistent with the form in which the corresponding results are presented in the original paper. We train our models with the Adam optimiser (Kingma and Ba, 2015) with a learning rate of 0.001. Our hierarchical models' utterance encoder is an LSTM cell (Hochreiter and Schmidhuber, 1997) of size 256, and the dialogue-level encoder is a GRU (Cho et al., 2014b) of size 512.

4.7 Results and Discussion

[Figure 4.3: Models' performance on the SMD dataset — mean BLEU (%) and mean Entity F1 (%) across domains against the target data ratio (1–10%), for ZSDG, NLU_ZSDG, NLU_ZSDG +Stage1, HRED, HRED +VAE, HRED +Stage1, HRED +ELMo, and DiKTNet. Dashed lines are ZSDG baselines that use a fixed set of annotated in-domain utterances instead of raw dialogues (see Section 4.4).]

Our results are shown in Figure 4.3 — see also Table A.1 of Appendix A for a more detailed breakdown. The former contains BLEU and Entity F1 scores averaged over target domains, and the latter has the corresponding values for each domain separately, showing means and variances.
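The Entity F1 metric reported above can be sketched as follows: a micro-averaged F1 over the entity sets of predicted and gold responses. This is an illustrative implementation, not the authors' exact evaluation script.

```python
def entity_f1(predicted_sets, gold_sets):
    """Micro-averaged F1 over per-response entity sets."""
    tp = sum(len(p & g) for p, g in zip(predicted_sets, gold_sets))
    fp = sum(len(p - g) for p, g in zip(predicted_sets, gold_sets))
    fn = sum(len(g - p) for p, g in zip(predicted_sets, gold_sets))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Unlike BLEU, this metric rewards only the KB entities (times, names, locations) that the response gets right, which is why it is the more task-relevant of the two scores.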
Our objective here is maximum accuracy with minimum training data.

4.7.1 Results for the Few-Shot Setup

It can be seen that few-shot models with DI-VAE/LAED representations are the best-performing models for this objective. While improvements upon ZSDG can already be seen with a simple HRED in a few-shot setup, the use of the Stage-1 representation and domain-general ELMo encoding helps significantly reduce the amount of in-domain training data needed: at 1% of in-domain dialogues, we see that DiKTNet consistently and significantly improves upon ZSDG in every domain.

Table 4.4: Selected clusters of utterances sharing the same DI-VAE codes

  Cluster 1: "Where can I go shopping?" / "Where does my friend live?" / "Where can I get Chinese food?" / "Where can I go to eat?" / "Can you please take me to a coffee house?"
  Cluster 2: "I'd like to set a reminder for my meeting at 2pm later this month please." / "What is the time and agenda for my meeting, and who is attending?" / "Schedule a lab appointment with my aunt for the 7th at 1pm." / "Schedule a calendar reminder for yoga with Jeff at 6pm on the 5th."
  Cluster 3 (Car): "I'm desiring to do some shopping: which one is it the nearest shopping center? Anything within 4 miles?" / "Get the address to my friend's house that i could get to the fastest" / "I need to get to my friends house, it should be within 4 miles from here"

In SMD, with its average dialogue length of 5.25 turns, 1% of training dialogues amounts to approximately 40 in-domain training utterances. In contrast, the ZSDG setup used approximately 150 training utterance-annotation pairs for each domain, including the target one, totalling about 450 annotated utterances. Although in our few-shot approach we use full in-domain dialogues, we end up having significantly less in-domain training data, with the crucial difference that none of it has to be annotated for our approach.
Therefore, the method we introduced improves upon the previous best approach in both accuracy and data-efficiency. In turn, the results of the NLU_ZSDG setup demonstrate that single utterance annotations, if not domain-specific and produced by human experts, do not provide as much signal as full dialogues, even ones without any annotations at all. Even the significant number of such annotated utterances per domain did not make a difference in this case. We would also like to point out that, as can be seen in the table, our results have quite high variance — the main source of it is the nature of our training/evaluation setup, where we average over 10 runs with 10 different sets of seed dialogues. However, in the majority of cases with comparable means, DiKTNet has a lower variance than the alternative models at the same percentage of seed data. And in the extreme case with 1% target data, DiKTNet improves on all the other models in terms of both means and variances.

4.7.2 Discussion of the Latent Representations

The comparison of the setups with different latent representations also gives us some insight: while the VAE-powered HRED model improves on the baseline in multiple cases, it lacks generalisation potential compared to the DI-VAE/LAED setup. The reason for that might be the inherently more stable training of DI-VAE/LAED due to their modified objective function, which in turn results in a more informative representation providing better generalisation.

Figure 4.4: DiKTNet performance with extended amounts of target data used for training (BLEU across all domains, and Entity F1 for the navigate, weather and schedule domains, at up to 50% target data; compared to KVRet trained on 100% of the data)
For instance, with the 'vanilla' VAE setup, we immediately experienced the commonly reported vanishing KL term problem (discussed previously in Section 2.3.3), which effectively turned our VAE into a significantly overfitted AE. With the discrete-information models — both DI-VAE and DI-VST — we did not experience this problem, even without using any of the techniques that are considered crucial in the training of VAEs (Bowman et al., 2016). In order to have a glimpse into the DI-VAE-produced clustering, in Table 4.4 we present a snippet of the utterance clusters sharing the same, most frequent latent codes throughout the dataset (the clustering is obtained with the DI-VAE model trained on every domain but 'Store details', i.e. the one used for the evaluation on the 'Navigate' SMD domain). From this snippet, it can be seen that those clusters work well for domain separation, as well as for capturing dialogue intents.

4.7.3 Results with Extended Data

We performed an additional experiment with extended amounts of target data (see Figure 4.4). It showed that DiKTNet, when trained with as little as 5% of target data, can outperform a KVRet trained using the entire dataset. Furthermore, with 50% of the target data, DiKTNet becomes more than twice as good as KVRet in terms of overall language generation.

Table 4.5: DiKTNet's selected responses

  Domain: schedule
    Context: "Remind me to take my pills" / "What time do you need to take your pills?" / "I need to take my pills at 7 pm."
    Gold response:      "Ok setting your medicine appointment for 7pm"
    Predicted response: "Okay, setting a reminder to take your pills at 7 pm."
  Domain: navigate
    Context: "Find the address to a hospital" / "Stanford Express Care is at 214 El Camino Real." / "Thank you."
    Gold response:      "Have a good day"
    Predicted response: "No problem."
  Domain: weather
    Context: "What is the weather forecast for the weekend?"
    Gold response:      "For what city would you like to know that?"
    Predicted response: "For what city would you like the weekend forecast for?"
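Inspecting clusters such as those in Table 4.4 amounts to grouping utterances by their discrete latent code; a sketch, assuming the codes have already been produced by a trained DI-VAE (all names illustrative):

```python
from collections import Counter, defaultdict

def cluster_by_code(utterances, codes):
    """Group utterances by their discrete latent code (e.g. DI-VAE codes)."""
    clusters = defaultdict(list)
    for utt, code in zip(utterances, codes):
        clusters[code].append(utt)
    return clusters

def most_frequent_codes(codes, top_k=3):
    """The most frequent codes throughout the dataset, as in Table 4.4."""
    return [code for code, _ in Counter(codes).most_common(top_k)]
```

Printing the clusters for the most frequent codes is then enough to eyeball whether they separate domains and intents.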
However, goal-oriented metrics such as Entity F1 are harder to improve with limited data. As such, DiKTNet outperforms KVRet on the 'Weather' domain starting at 10% of the target data, but only shows a trend of narrowing the performance gap with KVRet on 'Navigate', and clearly needs more training data in the 'Schedule' domain. The explanation for that might be that most of the dialogue entities come from the KB snippets, which are the least represented resource in our setup. They are not available in MetaLWOz, and in SMD, KB snippets share little in common across domains. Therefore, in order to increase Entity F1, KB information should be copied to the output more efficiently — and increasing the robustness of the copy-augmented decoder is one of our future research directions.

4.7.4 Discussion of the Evaluation Metrics

We use BLEU as one of the main evaluation metrics in this work in order to fully conform with the setup of Zhao and Eskénazi (2018), which we base our work on. But while being widely adopted as a general-purpose language generation metric, BLEU might not be sufficient in the dialogue setting (see Novikova et al. (2017) for a review). Specifically, we have observed several cases where the model would produce an overall grammatical response with the correct dialogue intent (e.g. "You are welcome! Anything else?"), but BLEU would assign it a low score due to word mismatch with the reference (e.g. "You're welcome!"; see more examples in Table 4.5). This is a general issue in dialogue model evaluation, since the variability of possible responses equivalent in meaning is very high in dialogue.
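The word-mismatch problem is visible even with a simplified unigram-overlap score (a toy stand-in for full BLEU, which additionally uses higher-order n-grams and corpus-level statistics):

```python
from collections import Counter
import math

def bleu1(candidate, reference):
    """Unigram precision with a brevity penalty -- an illustrative
    simplification of BLEU for a single sentence pair."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    # Clipped unigram matches via multiset intersection.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    precision = overlap / len(cand) if cand else 0.0
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

# A correct-intent paraphrase is penalised for surface mismatch:
paraphrase = bleu1("you are welcome ! anything else ?", "you 're welcome !")  # 3/7
exact = bleu1("you 're welcome !", "you 're welcome !")                        # 1.0
```

Both responses realise the same dialogue act, yet the paraphrase scores less than half of the exact match.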
We think that putting more emphasis on the meaning of utterances, for example by incorporating external dialogue act tagging resources into the evaluation setup, could, together with general language generation metrics like perplexity, make for more robust evaluation criteria than word overlap.

4.8 Conclusion

In this chapter, we have introduced DiKTNet, a model achieving strong dialogue response generation performance in a few-shot setup, without using any annotated data. By transferring latent dialogue knowledge from multiple sources of varying generality, we obtained a model with superior generalisation to underrepresented domains. Specifically, we showed that our few-shot approach improves upon the previous best model on the Stanford Multi-Domain dataset while being more data-efficient, by requiring significantly less data, none of which has to be annotated. In the next chapter, we will continue our study of low-resource dialogue generation by addressing the problem of fast adaptation of a dialogue system to a new domain, at the greater scale of 47 information-seeking domains (i.e. a version of MetaLWOz, which becomes our target dataset). As such, we will explore alternative ways of using the support in-domain data other than fine-tuning the base model on it.

Chapter 5. Dialogue Domain Adaptation

In this chapter, we take our research on few-shot dialogue modelling further and continue with the problem of fast domain adaptation for dialogue systems. As argued in the previous chapters, domain adaptation is the key approach to the development of data-efficient dialogue systems in the machine learning framework; here we are going to explore this problem at a greater scale, i.e. through the Eighth Dialog System Technology Challenge (DSTC), Fast Domain Adaptation task.
Specifically, we propose the hybrid Generative-Retrieval[1] Transformer, GRTr[2] — a model leveraging knowledge transfer from a large-scale pre-trained general-purpose language model and combining it with response retrieval logic. The model is able to maintain goal-oriented dialogue in a closed domain having only been exposed to a small set of in-domain dialogues as the domain description. Our hybrid model is ranked 1st on the MetaLWOz dataset as per human evaluation, and also performs competitively on automated metrics when compared to other baselines — both generation-only and retrieval-only models.

[1] Following Radford et al. (2018) and Zhang et al. (2020), we use the term "generative" in the sense of predicting (or generating) the next word given the context, i.e. language generation.
[2] Code is available at http://tiny.cc/grtr

5.1 Fast Domain Adaptation of a Dialogue System

As we saw in the previous chapter, few-shot knowledge transfer is a promising way to adapt a goal-oriented dialogue system to a new domain. In this chapter, we are going to continue working in the framework of the MetaLWOz dataset, but will increase the scale by making it our target dataset. Since MetaLWOz does not contain any goal-oriented annotations but overall represents cooperative information-seeking dialogue between two humans, this task focuses on predicting the utterances on the user's side. This can be considered a more challenging task since, normally, a user's utterances are less predictable than those of the system, and a prospective successful dialogue model should have a representation of the underlying user's goal in order to generate relevant queries to the system. Our task takes place within DSTC-8, which we are going to describe next.

Figure 5.1: Example DSTC-8 support set (background) and target dialogue (foreground) in the travel domain
5.1.1 DSTC-8, Fast Domain Adaptation Task

In the Eighth Edition of DSTC, the Domain Adaptation task focuses on building a model that predicts user responses for a goal-oriented dialogue system for which only limited in-domain data is available. The possible applications of an adaptive user-side dialogue model include Reinforcement Learning-based setups, which are highly dependent on the quality of the user simulator, as well as data augmentation approaches for improving the robustness and coverage of the target models. The in-domain adaptation data could be collected from e.g. customer service transcripts, or written by the developers themselves. From this in-domain data, the support set, one would like to extrapolate responses to novel dialogue contexts (the target) — see the example in Figure 5.1. However, the support set is typically too small to train a dialogue response generation model on. Instead, the approach assumed in the challenge is to adapt (or fine-tune) a generic dialogue model trained on a large corpus of conversations over multiple source domains. Technically, the problem setup is as follows: having trained the base model on the source domains, the model is then fed with one target dialogue and a support set at a time. The model's task is to predict the next user turn of the target dialogue, taking into account the support set before producing a prediction. At prediction time, each target dialogue is processed in isolation

Figure 5.2: GRTr model diagram. We (a) encode the target dialogue context and (b) produce the 'generated candidate'; next, we (c) encode support dialogue contexts in a similar way, then (d) find the nearest 'support' neighbour and select its response as the 'retrieved candidate'; finally, we (e) rank the two candidates given the target context and produce the final result.
from other target dialogues, such that the model cannot use knowledge or state obtained from other target/support data.

5.2 Proposed Model

We use a language model pre-trained on a very large and diverse collection of textual data, providing a strong language prior, and then adapt the model for our tasks in the form of fine-tuning. Our base model is GPT-2 (Wolf et al., 2019), a transformer-based language model. In order to adapt GPT-2 for dialogue generation, we first augment the input embedding for each token in the dialogue with (1) a speaker tag embedding identifying the speaker and (2) a turn embedding identifying the turn number in the current dialogue. These additional embedding matrices are learned solely from the dialogue data. The input token embeddings are then obtained by summing up these representations. We also add two task-specific output layers (or "heads") for our purposes: a language modelling (LM) head and a next-sentence prediction (NSP) classification head, both trained from randomly initialised parameters. We fine-tune GPT-2 for response generation by minimising the negative log-likelihood of response tokens given the concatenation of the dialogue context and the previous tokens in the response,

\mathcal{L}_{\mathrm{LM}} = -\log P_{\mathrm{LM}}(X \mid C) = -\sum_{i=1}^{|X|} \log P_{\mathrm{LM}}(x_i \mid x_{i-1}, \ldots, x_1, C),    (5.1)

where X is the response and C is the dialogue context, i.e. the concatenation of the tokens in the previous utterances.

To predict the next sentence, we proceed as follows: given a context/response pair (C, X), the classification head is trained to produce a binary label y, which is 1 if X is the correct response given the context C, and 0 if X is a distractor (a random utterance from the corpus).
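Given the per-token probabilities the model assigns to the response, Equation 5.1 reduces to a sum of negative log-probabilities; a sketch with made-up probabilities (in practice these come from the LM head's softmax):

```python
import math

def response_nll(token_probs):
    """L_LM = -sum_i log P_LM(x_i | x_{i-1}, ..., x_1, C), given the
    probability the model assigned to each ground-truth response token."""
    return -sum(math.log(p) for p in token_probs)

# Hypothetical per-token probabilities for a three-token response:
loss = response_nll([0.5, 0.25, 0.8])
```

Minimising this quantity pushes the model towards assigning probability 1 to every ground-truth token, at which point the loss is 0.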
We minimise the following binary cross-entropy:

\mathcal{L}_{\mathrm{NSP}} = -y \log P_{\mathrm{NSP}}(y \mid X, C) - (1 - y) \log P_{\mathrm{NSP}}(1 - y \mid X, C),    (5.2)

P_{\mathrm{NSP}}(y \mid X, C) = \mathrm{softmax}(f_{\mathrm{NSP}}(h_{X,C})),    (5.3)

where h_{X,C} is the last hidden state of the last GPT-2 layer after having encoded the concatenation of X and C, and f_{NSP} is the next-sentence prediction head (in our case a simple linear transformation). In practice, for each (C, X) pair in the corpus, we sample 1 distractor \bar{X}. We obtain a suitable dialogue prior by fine-tuning the modified GPT-2 model on the source domains with both the language modelling and next-sentence prediction tasks as described above, therefore minimising L = L_NSP + L_LM.

5.2.1 Fine-tuning on Target Domains and Prediction

As every test dialogue in the target domain/task is accompanied by a small support set of dialogues from the same domain/task, we make use of this data by further fine-tuning the dialogue model on the support dialogues. Crucially, we make sure not to accumulate any information between test dialogues: after each fine-tuning on the support set, we reset the weights of the model to the dialogue prior obtained at the fine-tuning stage described in the previous section. In order to add diversity to the responses, GPT-2 uses nucleus (top-p) sampling (Holtzman et al., 2020) during generation, i.e. the model's vocabulary V is pruned to V^p, the smallest set such that

\sum_{x \in V^p} p(x \mid x_{1:i-1}, C) \geq p,    (5.4)

and the final distribution from which the words are sampled is rescaled as follows:

P'(x \mid x_{1:i-1}) =
  \begin{cases}
    \dfrac{P(x \mid x_{1:i-1}, C)}{\sum_{x' \in V^p} P(x' \mid x_{1:i-1}, C)} & \text{if } x \in V^p, \\[1ex]
    0 & \text{otherwise.}
  \end{cases}    (5.5)
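Equations 5.4–5.5 can be sketched over a toy next-word distribution as follows (the real implementation operates on logits over the full GPT-2 vocabulary; function names are illustrative):

```python
import random

def nucleus_filter(dist, p=0.9):
    """Prune to V^p, the smallest highest-probability set with cumulative
    mass >= p (Eq. 5.4), then renormalise what is kept (Eq. 5.5).
    `dist` maps word -> probability."""
    total, kept = 0.0, {}
    for word, prob in sorted(dist.items(), key=lambda kv: -kv[1]):
        kept[word] = prob
        total += prob
        if total >= p:
            break
    return {w: pr / total for w, pr in kept.items()}

def sample_next_word(dist, p=0.9, rng=random):
    """Sample the next word from the rescaled nucleus distribution."""
    filtered = nucleus_filter(dist, p)
    words, weights = zip(*filtered.items())
    return rng.choices(words, weights=weights)[0]
```

With p small enough, the low-probability tail (the likely source of degenerate outputs) is cut away, while the surviving head keeps its relative proportions.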
5.2.2 Hybrid Generative-Retrieval Prediction

In our experiments, we found that retrieval baselines are quite effective on the automatic metrics considered. Therefore, we combined retrieval techniques with our response generation model in a hybrid approach — see Algorithm 1.

Algorithm 1: Hybrid generative-retrieval response prediction
  Input: Enc — GPT-2 encoder
  Input: Dec — GPT-2 decoder (language modelling head)
  Input: NSP — GPT-2 next sentence prediction head
  Input: t — turn number to predict
  Input: X_tgt — target dialogue context of length t-1
  Input: X_sup — support dialogues (sequences of turns), each of length >= t
   1: emb_tgt <- Enc(X_tgt)
   2: foreach i in 1 ... |X_sup| do
   3:     emb_sup_i <- Enc(X_sup_{i, 1...t-1})
   4: end
   5: j <- argmin_i dist(emb_tgt, emb_sup_i)        /* Euclidean distance used */
   6: y_gen <- Dec(emb_tgt)
   7: y_ret <- X_sup_{j, t}
   8: cands <- [y_gen, y_ret]
   9: k <- argmax_i NSP(Enc(X_tgt ⊕ cands_i))       /* ⊕ denotes concatenation */
  10: return cands_k

The retrieval component is set up as follows: when predicting the t-th turn of the test dialogue, the model embeds its context of length t-1, as well as all the support dialogue contexts of the same length t-1, using the fine-tuned dialogue encoder. The encoding of a dialogue context is the hidden state of the last layer of the Transformer model at the position corresponding to the last token in the context. Then, it selects the nearest support context to the target context and picks its t-th turn as the retrieved candidate response. Finally, the model's own generated response and the best retrieved candidate response are ranked using the NSP classification head, i.e. both responses are concatenated with the ground-truth context and the one with the higher P_NSP (Eq. 5.3) is selected. The above steps are visualised in Figure 5.2.
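Algorithm 1, with the GPT-2 encoder, decoder and NSP head replaced by stub functions, can be sketched end-to-end as follows (all names and stubs are illustrative, not the actual implementation):

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def hybrid_predict(tgt_context, support_dialogues, t, encode, generate, nsp_score):
    """Pick between the generated response and the nearest support response.

    `encode`, `generate` and `nsp_score` stand in for the fine-tuned GPT-2
    encoder, LM head and next-sentence-prediction head respectively."""
    emb_tgt = encode(tgt_context)
    # Nearest support context (first t-1 turns) by Euclidean distance.
    j = min(range(len(support_dialogues)),
            key=lambda i: euclidean(emb_tgt, encode(support_dialogues[i][:t - 1])))
    # Generated candidate vs. the t-th turn of the nearest support dialogue.
    candidates = [generate(tgt_context), support_dialogues[j][t - 1]]
    # Rank the two candidates with the NSP head.
    return max(candidates, key=lambda c: nsp_score(tgt_context, c))
```

Whichever candidate the (stub) NSP scorer prefers is returned, mirroring lines 8–10 of Algorithm 1.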
5.3 Baselines and Competing Models

We compare our hybrid model to the retrieval baselines provided by the DSTC-8 organisers. The baselines ignore the training data and rely solely on the support sets: they embed each support dialogue's context and find the one nearest to the target context using cosine distance as the metric. They then return the turn following the identified context as the predicted response. There are two retrieval-only baselines, which differ in their encoder: (1) BERT-based (Devlin et al., 2019), taken off-the-shelf, and (2) SentencePiece/FastText-based — representing text as sequences of subword units ('pieces'), with the subword tokenisation logic trained in an unsupervised way, in our case on the Reddit Conversations corpus (the approach is modelled after Gu et al., 2018). Another baseline provided is a generation-only model, a bidirectional LSTM-based HRED (Serban et al., 2016) trained on MetaLWOz. All the submissions at the final stage of the challenge are as follows (Li et al., 2020):
— Team A trained a BiLSTM on the provided Reddit corpus, then fine-tuned the model at test time using a mixture of MetaLWOz and MultiWOZ support dialogues, augmented to the context of the target dialogue, and dynamically-sampled Reddit threads,
— Team B — the work described in this chapter,
— Team C first fine-tuned GPT-2 on the MetaLWOz training corpus, then fine-tuned it further on the support sets of the MetaLWOz and MultiWOZ test sets,
— Team D trained a BiLSTM encoder and attentional LSTM decoder on both the Reddit and MetaLWOz training corpora, without any fine-tuning to the test sets.

5.4 Datasets

We use the main dataset for DSTC-8 Track 2 "Fast Domain Adaptation", MetaLWOz, which we described earlier in Section 2.7. Example dialogues from MetaLWOz can be found in Appendix A.
For evaluation purposes, the challenge organisers provide MultiWOZ, also described in Section 2.7. MultiWOZ is not present at the base training stage, and a given dialogue model only gets exposed to this data via support dialogues during the adaptation stage; it is therefore used as a means of evaluating adaptation performance on data substantially different from the main trainset. Dialogues in MultiWOZ contain NLU annotations, particularly for intents and slots, which we use in order to evaluate the systems' goal-oriented performance — see some example utterances in Table 5.1.

Table 5.1: Example annotated utterances from MultiWOZ

  Utterance: "I am looking for a particular restaurant. It is called pizza hut city centre."
  Markup:    {'Restaurant-Inform': [['Name', 'pizza hut city centre']]}

  Utterance: "I am looking for a place to to stay that has cheap price range it should be in a type of hotel"
  Markup:    {'Hotel-Inform': [['Type', 'hotel'], ['Price', 'cheap']]}

  Utterance: "I would like a taxi from Saint John's college to Pizza Hut Fen Ditton."
  Markup:    {'Taxi-Inform': [['Dest', 'pizza hut fen ditton'], ['Depart', "saint john 's college"]]}

  Utterance: "Okay that will work. Can you please tell me their phone number, postcode and the entrance fee?"
  Markup:    {'Attraction-Request': [['Fee', '?'], ['Post', '?'], ['Phone', '?']]}

  Utterance: "Just any time after 10:00, can I get the train ID of one of them please?"
  Markup:    {'Train-Inform': [['Leave', '10:00']], 'Train-Request': [['Id', '?']]}

5.5 Experimental Setup and Evaluation

We perform training in two stages: training of the base model, and fine-tuning it to the target dialogue's support set.
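For scoring, markup like that in Table 5.1 can be flattened into (intent, slot, value) triples; a sketch (the function name is illustrative):

```python
def flatten_markup(markup):
    """Turn MultiWOZ-style markup, a dict mapping 'Domain-Act' intents to
    [slot, value] pairs, into a flat list of (intent, slot, value) triples."""
    triples = []
    for intent, slot_values in markup.items():
        for slot, value in slot_values:
            triples.append((intent, slot, value))
    return triples
```

Comparing the triples extracted from a predicted response against the gold annotation is what makes intent and slot F1 scores computable.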
At the first stage, we train the model for a maximum of 5 epochs with early stopping. The fine-tuning stage goes on for 1 epoch. GPT-2 models use a context of 3 exchanges, or 5 turns (bot-user-bot-user-bot), predicting the next user utterance. We mainly used the 'small' GPT-2 checkpoint by HuggingFace — we also tried the 'medium' one, but found no improvement with it in our task.

5.5.1 Human Evaluation

Table 5.2: Ranking from judges' pairwise comparisons

  Rank  Submission      Win rate (%)
  1     Gold response   62.32
  2     Team B (ours)   56.85
  3     Team C          52.07
  4     Team A          47.35
  5     Baseline 1      44.18
  6     Team D          37.34

The main goal of the systems is to generate appropriate responses towards maintaining a natural cooperative dialogue on the user's side, so the main evaluation is performed with human judges. Specifically, Amazon Mechanical Turk workers were tasked to compare candidate responses given the dialogue context. Each comparison was pairwise between the results of two systems presented in random order. Judges ranked the responses against the following criteria (Li et al., 2020):
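Building (context, next-user-turn) pairs with this 5-turn window can be sketched as follows, assuming turns alternate bot/user with the bot speaking first (the function name is illustrative):

```python
def make_prediction_inputs(dialogue, window=5):
    """Yield (context, next-user-turn) pairs from a turn-alternating
    dialogue. The context is capped at the trailing `window` turns, so a
    full-length context is bot-user-bot-user-bot, as described above."""
    pairs = []
    for i in range(1, len(dialogue), 2):  # user turns sit at odd indices
        context = dialogue[max(0, i - window):i]
        pairs.append((context, dialogue[i]))
    return pairs
```

Early turns simply get shorter contexts; once the dialogue is long enough, every context is exactly five turns ending on a bot turn.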
Figure 5.3: DSTC-8 Fast Domain Adaptation — human evaluation (bootstrap rank distributions of Gold, Team B, Team C, Team A, Baseline and Team D on the 'appropriate', 'informative', 'useful' and 'easy to answer' criteria, n=100 each)

— Usefulness — whether the response is useful given the dialogue context and the user's overall final goal,
— Informativeness — whether the response specifically contains information relevant to the conversation,
— Appropriateness — whether the response is appropriate (on-topic, of a reasonable length, not repetitive) to the conversation,
— Easiness to answer — given a hypothetical conversational bot on the system side, whether the response would be a valid input for it and presumably straightforward to process.

For each pairing, 3 independent comparisons were performed against each metric. The number of comparisons required was reduced by letting the Multisort algorithm (Maystre and Grossglauser, 2017) determine which responses to compare, causing systems with similar performance to be compared more often with each other. Bootstrapping over the 100 randomly chosen dialogue contexts was used to determine average ranks and assess the ranking robustness (Hall et al., 2009).

5.5.2 Automatic Evaluation

In addition to human evaluation, we also assess model performance using automatic metrics. The models were evaluated on MetaLWOz against word-overlap metrics such as BLEU-1–3, CIDEr, METEOR and ROUGE-L using the NLGEval package (Sharma et al., 2017). Although not ideal for the specifics of dialogue and spoken language in general (Lowe et al., 2017; Dziri et al., 2019), such metrics approximate the overall quality of a response generation model and are especially useful for intermediate evaluation.
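Win rates like those in Table 5.2 are simply the fraction of pairwise comparisons a system wins; a sketch (the representation of a judgement as a (winner, loser) pair is an illustrative simplification):

```python
def win_rates(comparisons):
    """Given judgements as (winner, loser) pairs, return each system's
    fraction of the comparisons it took part in that it won."""
    wins, totals = {}, {}
    for winner, loser in comparisons:
        for system in (winner, loser):
            totals[system] = totals.get(system, 0) + 1
        wins[winner] = wins.get(winner, 0) + 1
    return {s: wins.get(s, 0) / totals[s] for s in totals}
```

The Multisort scheduling described above only changes which pairs get judged, not how the resulting win rates are computed.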
We evaluate models in two modes on MetaLWOz: in pure-task mode, support dialogues are drawn from the same domain and task as the target dialogue; in cross-task mode, support and target dialogues are from the same domain, but different tasks.

Figure 5.4: Automatic evaluation on MetaLWOz (BLEU-1–3 and ROUGE-L in pure-task and cross-task modes, for Ret. BERT, Ret. SP+FT, HRED, GPT-2 base, GPT-2 +sup and GRTr)

We also perform an additional evaluation of Entity/Intent F1 on the MultiWOZ dataset in pure-task mode with pre-trained NLU taggers from the ConvLab package (Lee et al., 2019b). There is no MultiWOZ data available at the first stage (base model training), so all the exposure our model has to this dataset is via support dialogues. Complementary to the MetaLWOz evaluation, this stage is designed for assessing the models' goal-oriented performance.

5.6 Results and Discussion

5.6.1 Human Evaluation

Results of the pairwise comparisons are shown in Table 5.2. Our GRTr system's responses (Team B) were preferred by the judges in 56% of direct comparisons. This surpasses the next best system's (Team C) performance by more than 4%, with only the gold human responses being chosen more frequently. Furthermore, from the bootstrap ranking distribution (Figure 5.3; lower rank numbers are better), we see that, apart from the gold human responses (blue graphs), our model's outputs (orange graphs) are consistently preferred over the other submissions by the judges. Of all the metrics used, the most notable are 'appropriateness' and 'usefulness'. On the former, GRTr responses have the second visible peak at rank 1, competing with gold responses. On usefulness, however, rank 1 is held by the gold responses with no variation, and our model has the second visible peak at rank 3, thus almost tying with Team C (green graphs).
Figure 5.5: Automatic evaluation on MultiWOZ (Intent F1 and Slot F1 for Ret. BERT, Ret. SP+FT, HRED, GPT-2 base, GPT-2 +sup, GRTr, Team D, Team C and Team A)

5.6.2 Automatic Evaluation

Results on MetaLWOz and MultiWOZ against automatic evaluation metrics are shown in Figures 5.4 and 5.5, respectively (a more detailed MetaLWOz evaluation is presented in Tables B.1 and B.2 of Appendix B). We observe that the retrieval baselines attain very competitive performance on both datasets, with FastText embeddings from Reddit leading to overall better results than off-the-shelf BERT, especially in the pure-task setting. With GRTr, we performed an ablation study to have a closer look into its performance. We evaluated three versions:
— GPT-2 base, a generation-only model trained on MetaLWOz and not making use of the support data,
— GPT-2 +sup, the base model fine-tuned to support data, also not using the retrieval logic,
— GRTr, our full hybrid model.
As seen in Figure 5.4, there is a strong dependence on support dialogues ('base' vs. '+sup'), as the base model mostly struggles to compete with the baselines. Adding the retrieval logic ('GRTr' vs. '+sup') results in further performance gains. HRED and GPT-2 base, the two models that did not use support dialogues, had comparable performance on MetaLWOz. In the goal-oriented metrics on MultiWOZ (see Figure 5.5), the same performance pattern is observed with the retrieval models, but GPT-2 in the generation-only version performs surprisingly
better when not fine-tuned to the support set ('base'). On the other hand, the hybrid model shows an even larger performance gain than on MetaLWOz. Presumably, generating responses for this dataset is harder due to the fact that it is not represented at the main training stage, and there is not much utterance overlap with MetaLWOz, so little knowledge transfer takes place in this experiment. Compared to the other submissions, we observe that GRTr still outperforms most of the competitors and only gives way to Team A's system. We hypothesise that the best MultiWOZ model (Team A) was fitted to the automatic evaluation metrics too tightly, with the negative side effect observable in the human evaluation results of Table 5.2 and Figure 5.3, where this system was prevalently ranked 4th and 5th.

Figure 5.6: GRTr generated/retrieved response ratios (generated share per domain; MetaLWOz pure-task: booking flight 64.1%, hotel reserve 63.8%, tourism 57.4%, vacation ideas 61.7%; MetaLWOz cross-task: 68.2%, 74.8%, 73.9%, 74.7% respectively; MultiWOZ: attraction 55.6%, hospital 60%, hotel 63%, police 52.1%, restaurant 61.3%, taxi 64.3%, train 61%)

5.6.3 Analysis of the Generated/Retrieved Responses

Table 5.3: GRTr example responses

  Example 1:
    Wiz: Hello how may I help you?
    Usr: I need some advice about Rome.
    Wiz: What do you want to know?
    Gold response (Usr):      I like history, where should I go?
    Predicted response (Usr): Who was the founder of Rome?

  Example 2:
    Wiz: Hello, how may I help you?
    Usr: I'm looking for a train to broxbourne.
    Wiz: Where would you like to leave from, and what day and time would you like to travel?
    Usr: I'm going to be leaving from Cambridge. I need to leave on Tuesday by 13:00.
    Wiz: Is there a time you would like to get there by?
    Gold response (Usr):      I don't have an arrival preference.
    Predicted response (Usr): No I don't have a leave time preference. Just not too early if I don't have too.

  Example 3:
    Wiz: Hello how may I help you?
    Usr: I'd like to book a flight.
    Wiz: Okay, any particular place in mind?
    Usr: How am I supposed to book a flight to Greece? Me and my kawaii girlfriend were wondering. She says "Ooooo, Greece. :3"
    Wiz: I can book a flight for two if you want, There will be an evening flight
    Usr: Hmm, wait, I don't really wanna book a flight. We were just curious! She says "Hey! No bookies! :("
    Wiz: Oh, I was confused by that. Can I have your email address so I could send several flight options for you
    Gold response (Usr):      Sure thing! My email address is "weeabooking@otaku.corn". She says "I wanna watch my anime now! ;_;"
    Predicted response (Usr): Well, I guess I'll just get back to you. Thanks!

In Figure 5.6, we show per-domain ratios of retrieved/generated responses from the hybrid model. We find that the majority of the responses are generated, and the retrieval logic works as a fallback option. On MetaLWOz, which the model had more exposure to during training, the generated-response ratio is generally slightly higher than on MultiWOZ, which was only seen by the model via support dialogues. Consequently, the model's overall confidence on the latter dataset is lower, which results in more frequent fallbacks. Generated candidates rarely duplicate the retrieved ones: we found that the percentage of predictions with identical generated/retrieved candidates is 0.7% (16 in total) for MetaLWOz pure-task, 0.6% (15 in total) for MetaLWOz cross-task, and 0.3% (10 in total) for MultiWOZ. More detailed information on the distribution of pairwise distances between GRTr response candidates can be found in Figures B.1–B.3 of Appendix B.
Moreover, in Tables B.3-B.8 of Appendix B, we show GRTr example predictions with the closest generated and retrieved candidates, as well as the most distant ones, for the MetaLWOz pure-task, MetaLWOz cross-task, and MultiWOZ datasets. We observe that the generated/retrieved candidates which were scored close to each other are either paraphrases of some generic phrase (e.g. "OK I'll go with that", "I like the idea") or both work well in the dialogue context (see the examples for more detail). On the other hand, in cases with a larger difference between the generated and the retrieved candidate's scores, we observe that most of the time the retrieved one did not match the context very well. Overall, we observe in Table 5.3 that there are many cases in the data where the gold response cannot possibly be inferred from the dialogue context. Specifically, the task was posed in such a way that no extra data, such as a knowledge base or task description, was provided to the system; therefore, the main goal intended for the hypothetical ideal system is to naturally model human responses in a co-operative goal-oriented dialogue, and to do so in a data-efficient way. This is reflected in the way human judges are asked about response quality.

5.7 Conclusion

We presented a hybrid generative-retrieval approach to goal-oriented dialogue with fast domain adaptation via transfer learning. It attains robust and diverse language generation performance across domains, and uses retrieval logic as a fallback mechanism in cases of low confidence. Our method is ranked 1st by the human judges of the DSTC-8 Fast Domain Adaptation task, and it attains performance superior to a series of baselines in automated metrics on the MetaLWOz and MultiWOZ datasets. The future directions of this research mainly include incorporating a more principled fine-tuning technique (i.e.
'learning to learn') and will be discussed in detail in Chapter 9.

Chapter 6

Spoken Disfluency Detection

Starting with this chapter, we will focus on practical aspects of data-efficient dialogue modelling. The problem we are going to address here refers back to our study in Chapter 3. As we saw, neural dialogue models lack robustness to certain aspects of spoken language such as high surface variability and the presence of disfluencies (e.g. self-corrections, hesitations, restarts). And although robustness can be improved using more representative training data, it would require datasets of impractical sizes to account for all the linguistic phenomena of interest while keeping performance on the target downstream task the main priority. From a developer's point of view, though, it is highly desirable to be able to develop systems which can be trained from 'clean' examples while also being able to generalise to the very diverse disfluent variations of the same data, thereby enhancing both data-efficiency and robustness. Therefore, it can be beneficial for dialogue systems research to develop a dedicated model that detects such variations in the user's input and passes this information into the downstream dialogue pipeline, either modular or end-to-end. In this chapter, we present a multitask LSTM-based model¹ for the incremental detection of disfluency structures, which can be hooked up to any component for incremental interpretation (e.g. an incremental semantic parser), or else simply used to 'clean up' the current utterance as it is being produced. We train the system on the Switchboard Dialog Acts (SWDA) corpus and present its accuracy on this dataset. Our model outperforms prior neural network-based incremental approaches by about 10 percentage points on SWDA while employing a simpler architecture.
To test the model's generalisation potential to goal-oriented utterances with their characteristic types of disfluency patterns, we evaluate the same model on the bAbI+ dataset (presented in Section 3.3), without any additional training. This shows that our approach has good generalisation potential, and sheds more light on which types of disfluency might be amenable to domain-general processing.

¹ Tensorflow (Abadi et al., 2015) and PyTorch (Paszke et al., 2017) implementations together with the trained models are available at http://bit.ly/multitask_disfluency

6.1 Motivation

As discussed in Section 2.6.1, spontaneous spoken dialogue is often disfluent, containing pauses, hesitations, self-corrections, and false starts. Processing such phenomena is essential to understanding a speaker's intended meaning and controlling the flow of the conversation. Furthermore, this processing needs to be word-by-word incremental to allow further downstream processing to begin as early as possible, in order to handle real spontaneous human conversational behaviour. In this chapter, we build upon the previous best approaches to incremental disfluency detection, the neural models of Hough and Schlangen (2015) and Schlangen and Hough (2017). Our contributions are that: (1) we produce a new multitask LSTM-based model with a simpler architecture for incremental disfluency detection, with significantly improved performance on SWDA, a disfluency-tagged corpus of open-domain conversations; and (2) we perform a generalisation experiment measuring how well the models perform on unseen data in the controlled environment of bAbI+ (Shalyminov et al., 2017), a dataset containing goal-oriented dialogue utterances whose specific vocabulary and syntactic structures make it fundamentally different from SWDA.
6.2 A Multitask LSTM-based Model for Spoken Disfluency Detection

Our approach to disfluency detection is a sequence tagging model which makes single-word predictions given context words w_{t-n+1}, ..., w_t of a maximum length n. We train it to perform two tasks jointly (cf. Hough and Schlangen, 2015):

1. predicting the disfluency tag of the current word, P(y_t | w_{t-n+1}, ..., w_t), and
2. predicting the next word in the sequence in a language-model way, P(w_{t+1} | w_{t-n+1}, ..., w_t).

At training time, we optimise the two tasks jointly, but at test time we only look at the resulting tags and ignore the LM predictions. Our model uses a shared LSTM encoder (Hochreiter and Schmidhuber, 1997) over combined 'Word/Part-of-Speech tag' tokens, which provides a context embedding for two independent multilayer perceptrons (MLPs) making the predictions for the two tasks; see Figure 6.1. The combined token vocabulary (word+POS) for the SWDA dataset is approximately 30% larger than the original word-only version; given this, concatenation is the simplest and most efficient way to pass POS information into the model.

Figure 6.1: Multitask LSTM disfluency detector architecture

The intuition behind adding an additional task to optimise for is that it serves as a natural regulariser: given an imbalanced distribution over very few labels (see Section 6.3 for the dataset description), learning only disfluency labels may lead to a higher degree of overfitting, and introducing an additional task with a significantly wider output space can help the model generalise better. Another potential benefit of having the model work as an LM is the possibility of unsupervised model improvements, e.g. pre-training the model's LM part on larger text corpora, or one-shot fine-tuning to new datasets with different word-sequence patterns.
In order to address the problem of significantly imbalanced training data (the majority of the words in the corpus are fluent), we use a weighted cross-entropy loss in which the weight of a data point is inversely proportional to its label's frequency in the training set. Our overall loss function is of the form:

    L = W·L_main + α·L_lm + (λ/2) Σ_i w_i²    (6.1)

where W·L_main and L_lm are the respective losses for the disfluency tagging (class-weighted) and language modelling tasks (the LM loss coefficient α is tuned empirically). We use class weights in the main task's loss to deal with the highly imbalanced data (see Section 6.3 for an overview of the data), so that the weight of the k-th class is calculated as W_k = 1/(C_k)^γ, where C_k is the number of k-th class instances in the training set, and γ is a smoothing constant set empirically. The last term in Eq. 6.1 is the L2 regularisation, which we apply to the model's weight parameters w_i (those of word embeddings, LSTM gates, and MLPs), leaving all the biases intact. The L2 coefficient λ is also tuned empirically.

Figure 6.2: Statistics of the SWDA corpus

6.3 The Switchboard Dialog Acts Dataset

For training our model, we use the Switchboard Dialog Acts dataset (SWDA) with manually annotated disfluency tags (Meteer et al., 1995). We use a version of the dataset pre-processed by Hough and Schlangen (2015), containing 90,497 utterances with transformed tagging: following their convention, there are 27 tags in total, consisting of an ⟨f/⟩ tag for fluent tokens; ⟨e/⟩ for edit tokens; ⟨rm-n⟩ tags for repair tokens that mark the start of the reparandum as n tokens/words back; and repair-end tags (e.g. ⟨rpSub⟩) which mark the end of the repair and classify whether the repair is a substitution or deletion repair.
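As a minimal sketch of this weighting scheme (the tag counts below are illustrative, not the real SWDA statistics):

```python
import math

def class_weights(counts, gamma=0.5):
    """W_k = 1 / (C_k ** gamma): rarer tags receive larger weights (the W of Eq. 6.1)."""
    return {tag: 1.0 / (count ** gamma) for tag, count in counts.items()}

def weighted_xent(probs, gold_tag, weights):
    """Class-weighted cross-entropy for a single token prediction."""
    return -weights[gold_tag] * math.log(probs[gold_tag])

# Illustrative counts: fluent tokens dominate, repair-onset tags are rare.
counts = {"<f/>": 96000, "<e/>": 2500, "<rm-1>": 900}
weights = class_weights(counts, gamma=0.5)
loss = weighted_xent({"<rm-1>": 0.2}, "<rm-1>", weights)
```

With γ = 0.5 and these counts, the rare ⟨rm-1⟩ tag is weighted roughly ten times more heavily than ⟨f/⟩, pushing the tagger to attend to repair onsets despite their scarcity.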
The latter can be combined with the ⟨rm-n⟩ tags, which explains the total of 27 tags; see (6.2) for an example where the repair word, 'Spanish', is tagged ⟨rm-4⟩⟨rpSub⟩, meaning this is a substitution repair that retraces 4 tokens back from the current token (see Figure 6.2).

    with⟨f/⟩ [Italian⟨f/⟩ + {uh⟨e/⟩ no⟨e/⟩ uh⟨e/⟩} Spanish⟨rm-4⟩⟨rpSub⟩] cuisine⟨f/⟩    (6.2)
    (reparandum: "Italian"; interregnum: "uh no uh"; repair: "Spanish")

The distribution of the different types of tokens is highly imbalanced: only about 4% of all tokens are involved in disfluency structures (detailed statistics are shown in the tables at the end of this chapter). See Section 6.2 above for how our model deals with this.

6.4 Disfluency Detection Generalisation to bAbI+

To evaluate the out-of-dataset generalisation properties of our model and that of Hough and Schlangen (2015), we employ additional data which we generate using the bAbI+ tools introduced in Chapter 3. bAbI+ augmentations can be mixed in with complete control over the syntactic and semantic contexts in which the phenomena appear; the bAbI+ environment therefore allows controlled, focused experimentation on the effect of different phenomena and their distributions on the performance of different models. Here, we use the bAbI+ tools to generate new data for a controlled generalisation experiment² on which kinds of disfluency phenomena are captured better by each model. We focus on the following disfluency patterns:

- Hesitations, e.g. as in "we will be uhm eight" (mixed-in single edit tokens);
- Prepositional Phrase restarts (PP-restart), e.g. "in a in a um in a moderate price range" (repair of a PP at its beginning, with or without an interregnum);
- Clausal restarts (CL-restart), e.g.
"can you make a restaurant uhm yeah can you make a restaurant reservation for four people with french cuisine in a moderate price range" (repair of the utterance from the beginning, starting at arbitrary positions);
- Corrections (NP and PP), e.g. "with Italian sorry Spanish cuisine", as initially discussed in Section 6.1.

We generated independent bAbI+ datasets with each disfluency type. The disfluency phenomena above were chosen to resemble disfluency patterns in the original SWDA corpus (see Tables 6.3 and 6.5 for examples), as well as to follow intuitive considerations about the phenomena relevant for goal-oriented dialogue (namely, corrections). The intuition for a generalisation experiment with data like this is as follows: while exhibiting similar disfluency patterns, our bAbI+ utterances differ from SWDA in terms of vocabulary and the word sequences themselves, as they are in the domain of goal-oriented human-computer dialogue; this property makes it possible to evaluate the generalisation capabilities of a model outside its training domain.

6.5 Evaluation and Experimental Setup

We employ exactly the same evaluation criteria as Hough and Schlangen (2015): micro-averaged F1-scores for edit tokens (Fe) and ⟨rm⟩ tokens (Frm), as well as for whole repair structures (Frps).

² Data is available at http://bit.ly/babi_plus_disfluencies_study
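These micro-averaged F1 scores can be sketched as a generic per-tag computation (the exact repair-structure accounting behind Frps is more involved and left out here):

```python
def micro_f1(gold, pred, target_tags):
    """Micro-averaged F1 over a set of target tags (e.g. {"<e/>"} for Fe)."""
    tp = sum(1 for g, p in zip(gold, pred) if g in target_tags and p == g)
    fp = sum(1 for g, p in zip(gold, pred) if p in target_tags and p != g)
    fn = sum(1 for g, p in zip(gold, pred) if g in target_tags and p != g)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Pooling counts over all tokens before computing precision/recall (rather than averaging per-class scores) is what makes the metric "micro"-averaged.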
Model                          Fe      Frm     Frps
Hough and Schlangen (2015)     0.902   0.711   0.689
Schlangen and Hough (2017)     0.918   --      0.719
LSTM                           0.915   0.693   0.775
ULMFiT MT-LSTM                 0.889   0.735   0.795
MT-LSTM                        0.919   0.753   0.816

Table 6.1: Evaluation of the disfluency tagging models on SWDA

                               hesitations  PP restarts             CL-restarts
Model                          Fe           Fe      Frm     Frps    Fe      Frm     Frps
Hough and Schlangen (2015)     0.917        0.774   0.875   0.877   0.938   0.471   0.630
LSTM                           0.956        1.000   0.982   0.993   0.948   0.360   0.495
ULMFiT MT-LSTM                 0.902        0.917   0.937   0.963   0.958   0.456   0.637
MT-LSTM                        0.910        1.000   0.993   0.997   0.991   0.484   0.659

Table 6.2: Controlled generalisation evaluation on the 3 bAbI+ datasets (one per disfluency phenomenon)

We compare our Multitask LSTM (MT-LSTM) model to its single-task version (disfluency tag predictions only), as well as to the system of Hough and Schlangen (2015) and the joint disfluency tagging/utterance segmentation model of Schlangen and Hough (2017), on all of the applicable word-level metrics on dialogue transcripts. The latter systems use a hand-crafted Markov Model for post-processing, whereas our model works in a streamlined single-stage way. In addition, we evaluate a version of our model that makes use of pre-trained contextual embeddings, namely AWD-LSTM (Merity et al., 2018) trained on the WikiText-103 dataset (Merity et al., 2017) following the ULMFiT technique (Howard and Ruder, 2018)³. Among the other widely used pre-trained contextual embeddings (e.g. ELMo, BERT, GPT-2), AWD-LSTM is a strictly left-to-right unidirectional model allowing incremental word-by-word state updates, which is an essential feature of all the models in this experiment. In order to make use of the pre-trained embeddings, we do not use combined word-POS tokens with this model and use separate LSTM encoders instead: the pre-trained one for words and a from-scratch one for POS tags.
We train our model using the SGD optimiser and monitor Frm on the dev set as a stopping criterion. The model's hyperparameters are tuned heuristically; the final values are listed in the tables at the end of this chapter.

³ We used a pre-trained model from fast.ai; code available at http://tiny.cc/ulmfit_disfluency

Repair                    Freq.
i i i                     139
the the the               33
and and and               31
it it it                  29
its its its               26
it was it was             67
i dont i dont             57
i think i think           44
in the in the             39
do you do you             23
a lot of a lot of         7
that was uh that was      5
it was uh it was          5
what do you what do you   4
i i dont i dont           4

Table 6.3: Most common repairs in SWDA of length 1-3

POS pattern        Examples                                         Freq., %
DT NN DT NN        this woman this socialite; a can a garage;       0.10
                   the school that school
JJ NN JJ NN        high school high school; good comedy good        0.03
                   humor; israeli situation palestinian situation
DT UH DT NN        that uh that punishment; the uh the              0.02
                   cauliflower; that uh that adjustment
DT NN UH DT NN     a friend uh a friend; a lot uh a lot;            0.01
                   a lot um a lot
NN PRP VBP NN NN   ribbon you know hair ribbon; thing you know      0.01
                   motion detector

Table 6.4: SWDA repairs by POS-tag pattern

6.6 Results

The results are shown in Table 6.1. Both the single- and the multitask LSTM are able to outperform the Hough and Schlangen (2015) model on edit tokens and repair structures, but the multitask one also performs significantly better on ⟨rm⟩ tags and surpasses both previous models. The reason Frps is generally higher than Frm is that, due to the tag conversion, fluent tokens inside reparanda and repairs are treated as part of the repair, and they contribute to the global positive and negative counters used in the micro-averaged F1. Large-scale pre-training, while potentially useful for better coverage of the target dataset, did not affect the final disfluency detection accuracy, as the ULMFiT MT-LSTM's results show.
The reason for that could be the initially large size of the underlying pre-trained AWD-LSTM: it contains 3 layers with 1024, 1024, and 400 neurons respectively, which is significantly more than what our best performing model uses (we also tried using only the first layer of the AWD-LSTM, which did not result in any significant accuracy improvement). Controlled generalisation experiment results are shown in Table 6.2; note that we could only run the model of Hough and Schlangen (2015) on bAbI+ data, because that of Schlangen and Hough (2017) works in a setup different from ours. It can be seen that the LSTM tagger is somewhat overfitted to edit tokens on SWDA. This is why it outperforms the Multitask LSTM on the hesitations dataset and ties at 1.0 on edit tokens on the PP-restarts dataset. In all other cases, the Multitask LSTM demonstrates superior generalisation. As for NP/PP self-corrections, which are not present in Table 6.2: none of the systems tested were able to handle these; evaluation on this dataset revealed 0.0 accuracy with all systems. We discuss these results below.

Keyword pattern   Examples                                              Freq., %
sorry *           or im sorry no; um im sorry what; thank you im        0.02
                  sorry i just got home from work
sorry *           and he told us theres two sixteen bit slots and two   0.009
                  eight bit sorry two four sixteen bit slots and two
                  eight bit slots available for the user
i mean *          i mean; i mean yeah; i mean uh; i mean i              4
i mean *          i mean i i; but i mean whats whats happened here      0.5
                  is is is i mean you youve

Table 6.5: SWDA repairs by interregnum

6.7 Conclusion

We have presented a multitask LSTM-based disfluency detection model which outperforms previous neural network-based incremental models while being significantly simpler than them. We have also demonstrated the generalisation potential of a disfluency detection model by cross-dataset evaluation.
As the results show, all models achieve a reasonably high level of generalisation on very local disfluency patterns such as hesitations and PP restarts. However, accuracy drops significantly on less restricted restarts spanning arbitrary regions of utterances from the beginning. On the majority of the disfluency patterns, our model achieves a superior level of generalisation. Interestingly, none of the models were able to detect NP or PP corrections such as those often glossed in disfluency papers (e.g. "A flight to Boston uh I mean to Denver"). The most likely explanation is the extreme sparsity of such disfluencies in the SWDA dataset. To explore this hypothesis, we analysed SWDA disfluencies and examined their distribution based on length in tokens and on POS-tag sequence patterns of interest. As shown in Tables 6.3 and 6.4, the vast majority of disfluencies found are just repetitions, without speakers actually correcting themselves. This observation is in line with prior studies showing that the distribution of repair types varies significantly across domains (Colman and Healey, 2011), modalities (Oviatt, 1995), and gender & age groups (Bortfeld et al., 2001); see Purver et al. (2018) for a discussion. While this is very likely the correct explanation, we cannot rule out the possibility that such self-corrections are inherently more difficult to process for particular models; testing that would need a separate training dataset that holds the frequency of particular repair structures constant.

Incremental phenomena of spontaneous spoken language are one aspect of natural input which dialogue systems should process robustly. Another natural phenomenon that a dialogue system can be exposed to is the presence of out-of-domain (OOD) utterances in the input.
With most of a neural dialogue model's logic learned from data and not directly observable, the system's behaviour on unseen input is often unpredictable, which significantly limits such a system's practical usability. In the next chapter, we continue our work on dialogue systems' robustness by addressing the problem of detecting and handling OOD user input in a predictable way, while keeping the overall training setup data-efficient.

Chapter 7

Improving Out-of-Domain Robustness of Dialogue Systems

In this chapter, we continue with the problem of dialogue systems' robustness to natural and diverse user input. In the previous chapter, we worked on improving robustness to surface variations of the user's input, represented as disfluencies in spoken language. Here, we cover another kind of input that can be considered anomalous, especially for systems trained in a data-efficient way from a few example dialogues: out-of-domain (OOD) input, i.e. user queries that the system is unable to interpret and process correctly. We explore the problem of OOD robustness of dialogue systems and the associated trade-off between accuracy on seen and unseen data. We present a new experimental testbed for studying the robustness of dialogue systems to OOD input, which is bAbI Dialog Task 6 (Bordes et al., 2017) augmented with OOD content in a controlled way. We then present turn dropout, a simple yet efficient technique for training OOD-robust dialogue systems based on automatic data augmentation, thus alleviating the dependence on real OOD data. We also propose a simple unified way to train the target systems for OOD input detection as well as for their main task at the same time.
Moreover, based on prior approaches to efficient OOD input detection using autoencoders (see our discussion in Section 2.6.2), we propose a variant of the Hybrid Code Network (HCN) dialogue management model (Williams et al., 2017) augmented with an autoencoder for contextual OOD input detection, called AE-HCN(-CNN). We experiment with this model, as well as with a range of HCN-family models adapted for training with turn dropout: Variational HCN (VHCN), Hierarchical HCN (HHCN), and the original HCN, and demonstrate the resulting performance improvement on bAbI Dialog Task 6 and the Google multi-domain dialogue datasets (Shah et al., 2018).

1 sys: Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?
2 usr: I'd like to get away from Monday, August 15, 2016 to Wednesday, August 31, 2016. My budget is $3200 and I would leave from Detroit.
3 sys: Sorry I didn't catch that. Could you please repeat?
4 usr: Will there be frost in Durham next week?
5 sys: Sorry I didn't catch that. Could you please repeat?
6 usr: So sorry man I want a moderately priced restaurant in the west part of town
7 sys:

Table 7.1: Augmented dialogue example (turn-level OOD content in turns 2 and 4, segment-level in turn 6)

7.1 The Experimental Environment for OOD Robustness Studies

In order to study the effect of OOD input on an end-to-end dialogue system's performance, we created an experimental testbed based on (1) a dataset of real human-computer goal-oriented dialogues and (2) a data augmentation technique, the bAbI tools (presented in Chapter 3), for mixing real user utterances from other domains into the main dataset in a controlled way.
As our main dataset, we use bAbI Dialog Task 6 (Bordes et al., 2017), with real human-computer conversations in the restaurant search domain initially collected for the Dialog State Tracking Challenge 2 (Henderson et al., 2014a). Our OOD augmentations are as follows:

- turn-level OOD: user requests from a foreign domain; the desired system behaviour for such input is the fallback action,
- segment-level OOD: interjections in the user's in-domain requests; these are treated as valid user input and are supposed to be handled by the system in the regular way.

These two augmentation types reflect a specific dialogue pattern of interest (see Table 7.1): first, the user utters a request from another domain at an arbitrary point in the dialogue (each turn is augmented with probability p_ood_start), and the system answers accordingly. This may go on for several turns in a row; each subsequent turn is augmented with probability p_ood_cont. Eventually, the OOD sequence ends and the dialogue continues as usual, with a segment-level OOD in which the user affirms their mistake. For this study, we set p_ood_start to 0.2 and p_ood_cont to 0.4.¹ While we introduce the OOD augmentations in a controlled, programmatic way, the actual OOD content is natural. The turn-level OOD utterances are taken from conversational datasets in several foreign domains:

- Frames dataset (Asri et al., 2017): travel booking (1198 utterances),

¹ We experimented with other values of p_ood_start and p_ood_cont but did not see significant differences in the results. Further experiments for different domains are encouraged using the tools provided.
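A minimal sketch of the turn-level augmentation loop described above (function and action names are assumptions; the real tools also handle segment-level insertions):

```python
import random

def insert_ood_turns(dialogue, ood_pool, p_start=0.2, p_cont=0.4, rng=None):
    """Before each in-domain turn, start a run of turn-level OOD user requests
    with probability p_start and extend it with probability p_cont; every OOD
    turn is paired with the system's fallback action."""
    rng = rng or random.Random(0)
    augmented = []
    for user_utterance, system_action in dialogue:
        if rng.random() < p_start:
            augmented.append((rng.choice(ood_pool), "FALLBACK"))
            while rng.random() < p_cont:
                augmented.append((rng.choice(ood_pool), "FALLBACK"))
        augmented.append((user_utterance, system_action))
    return augmented
```

The geometric continuation with p_cont produces the multi-turn OOD runs visible in Table 7.1 (turns 2-5), while the original in-domain turns and their labels stay untouched.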
Figure 7.1: Hybrid Code Network model family

- Stanford Multi-Domain (SMD) Dialogues Dataset (Eric et al., 2017): calendar scheduling, weather information retrieval, city navigation (3030 utterances),
- Dialog State Tracking Challenge 1 (Williams et al., 2013): bus information (968 utterances).

In order to avoid incomplete/elliptical phrases, we only took the first user utterances from the dialogues. For segment-level OOD, we mined utterances with an explicit affirmation of a mistake (e.g. "my mistake", "I'm so sorry") from the Twitter and Reddit Conversations datasets: 701 and 500 utterances, respectively. Our datasets, as well as the tools for OOD-augmentation of arbitrary datasets of interest, are openly available².

7.2 OOD-Robust Dialogue Models

7.2.1 Hybrid Code Network Model Family

In this chapter, we experiment with the Hybrid Code Network family of models (Williams et al., 2017) on bAbI Dialog Task 6 data (Bordes et al., 2017). HCN is reported to be the best performing model to date on the original, IND-only bAbI Dialog Task 6 data; thus, this is our primary experimental setup here. As shown in Figure 7.1, HCN considers a dialogue as a sequence of turns. At each turn, HCN takes a tuple (x_t, a_{t-1}, s_t) as the input to produce the next system action³ a_t, where x_t is a user utterance consisting of N tokens, i.e. x_t = {x_{t,1}, ..., x_{t,N}}, a_{t-1} is a one-hot vector encoding the previous system action, and s_t is a contextual feature vector generated by domain-specific code.

² See https://github.com/ishalyminov/ood_robust_hcn
³ A system action can be either a text output or an api call.
The user utterance is encoded as the concatenation of a bag-of-words representation and the average of the word embeddings of the user utterance:

    u_t = [bow(x_t); (1/N) Σ_{i=1}^{N} e(x_i)]    (7.1)

where e(·) denotes a word embedding layer initialised with GloVe (Pennington et al., 2014) or Google News-based Word2Vec (Mikolov et al., 2013) embeddings, frozen at training time. HCN then considers the input tuple (u_t, a_{t-1}, s_t) to update the dialogue state through an LSTM (Hochreiter and Schmidhuber, 1997):

    h_t = LSTM(h_{t-1}, [u_t; a_{t-1}; s_t])    (7.2)

Finally, a distribution over system actions is calculated by a dense layer with linear projection parameters W and b, the weight matrix and the bias vector respectively, and with a softmax activation:

    P(a_t) = softmax(W h_t + b)    (7.3)

Thus, HCN is a hierarchical dialogue management model with a turn-level and a dialogue-level component (we will call them both encoders). The turn-level encoder produces a latent representation of a single dialogue turn, and the dialogue-level encoder augments it with additional dialogue-level features. We will first explore a series of HCN variations differing in their turn-level encoder (and the corresponding optimisation objective); see the illustration in Figure 7.1.

The original HCN is as described above. Its optimisation objective for a single prediction is the categorical cross-entropy with respect to the log-likelihood of the output dialogue action (here and in Eq. 7.7 we show maximisation objectives for simplicity; in the actual implementation, they are minimised with their sign reversed):

    L_HCN = log p(a_t | x_1, ..., x_t, a_{t-1}, s_t)    (7.4)

where a_t is the dialogue action, x_1, ..., x_t is the dialogue context ending with the current user turn, a_{t-1} is the last system action, and s_t is the domain-specific dialogue feature vector.
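A plain-Python sketch of the turn encoding in Eq. 7.1 (the toy vocabulary and embeddings are illustrative; in the model the embeddings come from GloVe/Word2Vec and stay frozen):

```python
def encode_turn(tokens, vocab, embeddings):
    """u_t = [bow(x_t); (1/N) * sum_i e(x_i)]  (cf. Eq. 7.1)."""
    bow = [0.0] * len(vocab)
    for tok in tokens:
        bow[vocab[tok]] += 1.0
    dim = len(next(iter(embeddings.values())))
    avg = [sum(embeddings[tok][d] for tok in tokens) / len(tokens)
           for d in range(dim)]
    return bow + avg  # concatenation: |V| BoW counts + embedding average

# Toy example with a 3-word vocabulary and 2-dimensional embeddings:
vocab = {"book": 0, "a": 1, "table": 2}
embeddings = {"book": [1.0, 0.0], "a": [0.0, 0.0], "table": [0.0, 1.0]}
u_t = encode_turn(["book", "a", "table"], vocab, embeddings)
```

The BoW half preserves exact lexical identity while the averaged-embedding half captures distributional similarity; both are cheap to compute per turn.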
Note that the losses of all our HCN models are defined with respect to the model parameters θ_HCN, i.e. the parameters of the underlying LSTMs, embeddings, and projection layers, which we omit for visual simplicity.

Hierarchical HCN (HHCN) uses an RNN (in our case, an LSTM cell) to encode each utterance:

    u_t^HHCN = LSTM(e(x_t))    (7.5)

Its optimisation objective is the same as HCN's. Variants of this model were described by Lee (2017) and Liang and Yang (2018).

Variational HCN (VHCN) uses a Variational Autoencoder as the turn-level encoder, so that the resulting turn encoding is the VAE's latent variable z (see also our discussion of latent variable models in Section 2.3.3):

    u_t^VHCN = μ(LSTM(e(x_t))) + σ(LSTM(e(x_t))) · N(0, 1)    (7.6)

where μ and σ are MLPs predicting the parameters of z's posterior distribution, and N(0, 1) is a sample from its prior distribution, a standard Gaussian (Bowman et al., 2016). This model differs from the previous two in that it learns dialogue management and input autoencoding jointly. In order to keep the user input reconstruction task less complex than the main one, we represent the VAE's reconstruction targets as bags of words (BoW) instead of sequences; in that, we follow Zhao et al. (2017). Thus, the VHCN optimisation objective is as follows:

    L_VHCN = E_q(z)[log p(a_t | x_1, ..., x_t, a_{t-1}, s_t)] + E_q(z)[log p(x_t^BoW | z)] - KL(q(z | x_t) || p(z))    (7.7)

In the above formula, the first term is the main task's log-likelihood of the dialogue action a_t, the second is the VAE's reconstruction term for the last user input in bag-of-words form x_t^BoW, and the last term is the KL-divergence between the prior and posterior distributions of the VAE's latent variable z; following Bowman et al. (2016), we compute it in closed form.
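The sampling in Eq. 7.6 and the closed-form KL term of Eq. 7.7 can be sketched for a diagonal Gaussian posterior as follows (a generic VAE sketch under a log-variance parameterisation, which is an assumption, not the thesis code):

```python
import math
import random

def sample_latent(mu, log_var, rng):
    """Reparameterised draw z = mu + sigma * eps with eps ~ N(0, I)  (cf. Eq. 7.6)."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior,
    the last term of Eq. 7.7."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, log_var))

# The KL term vanishes exactly when the posterior equals the prior:
kl_at_prior = kl_to_standard_normal([0.0, 0.0], [0.0, 0.0])
```

The reparameterisation keeps the sampling step differentiable with respect to μ and σ, which is what allows the VAE term to be trained jointly with the dialogue-action loss.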
Another benefit of the BoW loss, as reported by Zhao et al. (2017), is that it helps preserve the variational properties of the model (i.e. a non-zero KL term) without the KL-term annealing trick (Bowman et al., 2016), which is itself challenging to control in practice. Unlike the authors of the original BoW loss, who approximate the presence of each word in the reconstructed bag with a feed-forward neural network and then sum up each word's log-probability in the final loss, we use a simpler method: we represent the BoW as a single vocabulary-sized vector with k-hot values (1 for every word present in the bag, 0 elsewhere) and use a single sigmoid cross-entropy loss for it. All the models above use the same dialogue-level LSTM encoder with additional features concatenated to the turn representations: BoW turn features, dialogue context features, and the previous system action⁴.

Figure 7.2: AE-HCN model architecture

7.2.2 AE-HCN

Finally, we introduce the two autoencoder-augmented architectures, AE-HCN and AE-HCN-CNN, where the HCN model is aware of the AE's reconstruction score, together with a training method based on automatic input augmentation. AE-HCN is an HCN whose dialogue-level encoder takes an additional input for the dialogue state update: specifically, the autoencoder's reconstruction score r_t for the user's utterance (Figure 7.2):

    h_t = LSTM(h_{t-1}, [u_t; a_{t-1}; s_t; r_t])    (7.8)

The autoencoder is a standard Seq2Seq model which projects a user utterance into a latent vector and then reconstructs the user utterance.
Specifically, the encoder reads x_t using a GRU (Cho et al., 2014a) to produce a 512-dimensional hidden vector h^enc_N, which in turn gets linearly projected to a 200-dimensional latent vector z with the corresponding weight and bias parameters W_z and b_z, respectively; see the formulas below:

h^enc_n = GRU_enc(h^enc_{n-1}, e(x_n)),  1 < n < N   (7.9)

z = W_z h^enc_N + b_z   (7.10)

The output of the decoder y at step n is a distribution over words:

P_dec(y_n) = softmax(W_dec h^dec_n + b_dec)   (7.11)

h^dec_n = GRU_dec(h^dec_{n-1}, e(y_{n-1}))   (7.12)

h^dec_0 = W_dec z + b_dec   (7.13)

where GRU_dec has 512 hidden units. The reconstruction score r_t is the normalised generation probability of x_t (which is both the input and the output of the autoencoder):

r_t = (1/N) * Σ_{n=0}^{N} log P_dec(x_n)   (7.14)

[4] Without loss of generality of the architecture, we include action mask vectors as additional features for the dialogue-level LSTM (Williams et al., 2017), but they do not convey any information and are always set to all 1's.

7.2.3 AE-HCN-CNN

AE-HCN-CNN is a variant of AE-HCN where user utterances are encoded using a CNN layer with max-pooling (following Kim, 2014) rather than Equation 7.1:

x_t = Pooling_max(CNN(e(x_1), ..., e(x_n)))   (7.15)

The CNN layer considers two kernel sizes (2 and 3) and has 100 filters for each kernel size.

7.3 Training with Turn Dropout

In order to train a system robust to OOD input in the absence of real OOD examples, we employ a negative-sampling-based approach and generate such examples synthetically from the available IND data, using a technique we call turn dropout. Namely, we replace random dialogue turns with synthetic ones and assign them the fallback action.
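The length-normalised reconstruction score of Eq. 7.14 can be sketched as follows; the toy `softmax` stands in for the decoder's output layer, and both function names are illustrative assumptions:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reconstruction_score(per_step_logits, token_ids):
    """Length-normalised generation log-probability of an utterance
    (cf. Eq. 7.14): average of log P_dec(x_n) over the decoding steps."""
    log_probs = [math.log(softmax(logits)[tok])
                 for logits, tok in zip(per_step_logits, token_ids)]
    return sum(log_probs) / len(log_probs)
```

For a uniform decoder distribution over four tokens, each step contributes log(1/4), so the score equals log(1/4) regardless of utterance length.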
Table 7.2: Model hyperparameters

Hyperparameter             HCN     HHCN    VHCN
Embedding size             64      128     128
Latent variable size       -       -       8
Turn dropout ratio         0.4     0.6     0.3
Learning rate              0.001 (all models)
Early stopping threshold   20 epochs (all models)
Optimiser                  Adam (all models)
Word dropout ratio         0.2 (all models)

Table 7.3: Evaluation results

           bAbI Dialog Task 6    bAbI Dialog Task 6 + OOD
           Overall acc.          Overall acc.   Seg. acc.   OOD acc.   OOD F1
HCN        0.557                 0.438          0.455       0.0        0.0
HHCN       0.531                 0.418          0.424       0.0        0.0
VHCN       0.533                 0.413          0.413       0.0        0.0
TD-HCN     0.563                 0.575          0.257       0.754      0.743
TD-HHCN    0.505                 0.455          0.435       0.274      0.418
TD-VHCN    0.565                 0.545          0.407       0.530      0.667

7.3.1 TD-HCN

More formally, our dialogue features are as follows: ⟨f_turn, f_ctx, f_mask, a⟩, i.e. turn features (token sequences), dialogue context features, action masks, and target actions, respectively. Under turn dropout, for a randomly selected dialogue i and its turn j, we replace f_turn_ij with a sequence of random vocabulary words (drawn from a uniform distribution over the vocabulary) and UNK tokens, replace the corresponding a_ij with the fallback action, and leave all other features intact. In this way, we simulate anomalous turns for the system given usual contexts (as stored in the dialogue RNN's state), and we place minimal assumptions on the synthesised turns' structure (we only limit their lengths to be within the bounds of real utterances).

7.3.2 Training the AE-HCN(-CNN)

While essentially similar to the TD-HCN algorithm described above, training these two architectures with data augmentation involves providing reconstruction scores for the 'dropped-out' turns. We describe the full procedure below. To endow an AE-HCN(-CNN) model with the capability of detecting OOD utterances and producing fallback actions without requiring real OOD data, we augment the training data with counterfeit turns.
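The turn replacement of Section 7.3.1 can be sketched with the stdlib alone; the feature-tuple layout, token names, and length bounds are assumptions for illustration:

```python
import random

UNK = "<UNK>"          # placeholder unknown-word token
FALLBACK = "fallback"  # placeholder fallback action label

def turn_dropout(dialogue, vocab, min_len, max_len, ratio, rng=random):
    """Turn dropout (Section 7.3.1): with probability `ratio`, replace a
    turn's token sequence with random vocabulary words and UNK tokens,
    set its target action to the fallback action, and leave the other
    features intact. `dialogue` is a list of
    (f_turn, f_ctx, f_mask, action) tuples, mirroring the text."""
    noisy = []
    for f_turn, f_ctx, f_mask, action in dialogue:
        if rng.random() < ratio:
            # Synthetic turn length stays within the bounds of real utterances.
            length = rng.randint(min_len, max_len)
            f_turn = [rng.choice(vocab + [UNK]) for _ in range(length)]
            action = FALLBACK
        noisy.append((f_turn, f_ctx, f_mask, action))
    return noisy
```

A ratio of 0 leaves the dialogue untouched, while a ratio of 1 relabels every turn with the fallback action, which brackets the behaviour at the tuned ratios in Table 7.2.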
Table 7.4: bAbI6, GR, and GM dataset statistics

Dataset          # Dialogues   Avg. # turns per dialogue   # actions
bAbI6 train      1618          20.08                       58
bAbI6 dev        500           19.30                       58
bAbI6 test       1117          22.07                       58
bAbI6 test-OOD   1117          27.27                       59
GR train         1116          9.07                        247
GR dev           349           6.53                        247
GR test          775           6.87                        247
GR test-OOD      775           9.01                        248
GM train         362           8.78                        194
GM dev           111           9.14                        194
GM test          252           8.73                        194
GM test-OOD      252           11.25                       195

We first select arbitrary turns in a dialogue at random according to a counterfeit OOD probability ρ, and insert counterfeit turns before the selected turns. A counterfeit turn consists of a tuple (x_t, a_{t-1}, s_t, r_t) as input and a fallback action a_t as output. We copy a_{t-1} and s_t of each selected turn to the corresponding counterfeit turns, since OOD utterances do not affect the previous system action or the feature vectors generated by domain-specific code. Now we generate a counterfeit x_t and r_t. Since we do not know OOD utterances a priori, we randomly choose one of the user utterances of the same dialogue to be x_t. This helps the model learn to detect OOD utterances, because a random user utterance is contextually inappropriate just like OOD utterances are. We generate r_t by drawing a sample from a uniform distribution, U[α, β], where α is the maximum reconstruction score of the training data and β is an arbitrary large number. The rationale is that the reconstruction scores of OOD utterances are likely to be larger than α, but we do not know what distribution the reconstruction scores of OOD turns would follow. Thus we choose the most uninformed distribution, i.e. a uniform one, so that the model may be encouraged to consider not only the reconstruction score but also other contextual features, such as the appropriateness of the user utterance given the context, changes in the domain-specific feature vector, and what action the system previously took.
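The counterfeit-turn augmentation described above can be sketched as follows (stdlib only); the turn-tuple layout and names are illustrative assumptions, not the deployed data format:

```python
import random

FALLBACK = "fallback"  # placeholder fallback action label

def insert_counterfeit_turns(dialogue, rho, alpha, beta, rng=random):
    """Counterfeit OOD augmentation for AE-HCN(-CNN) (Section 7.3.2).
    `dialogue` is a list of (x_t, a_prev, s_t, r_t, a_t) tuples. Before
    each selected turn (probability `rho`) we insert a counterfeit turn
    that reuses a_prev and s_t, borrows a random user utterance from the
    same dialogue as x_t, draws r_t from U[alpha, beta], and targets the
    fallback action."""
    utterances = [turn[0] for turn in dialogue]
    augmented = []
    for x_t, a_prev, s_t, r_t, a_t in dialogue:
        if rng.random() < rho:
            fake_x = rng.choice(utterances)    # contextually inappropriate utterance
            fake_r = rng.uniform(alpha, beta)  # counterfeit reconstruction score
            augmented.append((fake_x, a_prev, s_t, fake_r, FALLBACK))
        augmented.append((x_t, a_prev, s_t, r_t, a_t))
    return augmented
```

Setting ρ to 1 doubles the dialogue length and every counterfeit score lands inside [α, β], matching the construction described in the text.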
7.4 Experiment 1: HHCN & VHCN

We train our models using only the original bAbI Dialog Task 6 dataset, and evaluate them on our OOD-augmented versions of it. We use per-utterance accuracy as our main evaluation metric; the models are trained with the same hyperparameters (where applicable), listed in Table 7.2. The models use a common unified vocabulary including all words from our datasets (including OOD content). The intuition behind this is as follows: production dialogue models often use word embedding matrices with vocabularies significantly exceeding that of the training data, in order to take advantage of additional generalisation power via relations like synonymy, hyponymy, or hypernymy, which are efficiently modelled by distributed word representations. Therefore, simply mapping every unseen word to 'UNK' does not quite reflect that setting. We tuned our models' hyperparameters using a 2-stage grid search, tracking development set accuracy. At the first stage, we adjusted the embedding dimensionality of our models (and the latent variable size in the case of VHCN). Then, given the values found, at the second stage we adjusted the turn dropout ratio within the interval [0.05, 0.7]. Exact hyperparameter values are detailed in Table 7.2. The results are shown in Table 7.3; note that, apart from the accuracies, we report OOD F1, a metric showing the model's performance as a conventional OOD detector, with the positive class being the fallback action and the negative class being all the IND-class actions. Finally, given the stochastic nature of VHCN, we report its mean accuracy scores over 3 runs (we used the same criterion for selecting the best model during the training procedure).
7.4.1 Results and Discussion

As our experiment shows, when learning to handle both IND and OOD input with access to IND-only data at training time, the following trade-off appears: a model performing better on the 'clean' test turns is prone to lower accuracy on OOD; it can be said that it slightly overfits to its dev set. On the other hand, a model regularised with turn dropout during training naturally performs better on unseen OOD turns, but with lower accuracy on its 'clean' IND test data. Another side of the trade-off is the accuracy of OOD detection vs. robust handling of IND input with segment-level noise. As our results show, the models specifically trained for OOD detection all demonstrate lower accuracy on the noisy IND data. Among the models we evaluated, it is worth noting that the original HCN demonstrated the best performance as an OOD detector (more than 74% F1-score), and thus the best overall IND + OOD accuracy on the augmented dataset: more than 57%. While some parts of its architecture (e.g. mean-vector-based turn encoding or the bag-of-words feature vector at the utterance level) may not seem to be the most robust solutions, the model demonstrates superior overall performance. Averaging at the turn level instead of recurrent encoding (the case for HHCN and VHCN) makes the model dependent not on the actual word sequences seen during training but on the keywords themselves. In turn, VHCN demonstrated superior performance on IND data when trained with turn dropout, more than 56%; it benefited in terms of both overall accuracy and the absence of false-positive OODs, thus improving upon the original HCN's results (Williams et al., 2017).
Table 7.5: AE-HCN(-CNN) evaluation results

               bAbI6                       GR                          GM
               Test     Test-OOD           Test     Test-OOD           Test     Test-OOD
               P@1      P@1      OOD F1    P@3      P@3      OOD F1    P@3      P@3      OOD F1
HCN            53.41    41.95    0         58.89    41.65    0         41.18    27.08    0
AE-HCN-Indep   31.29    41.06    48.68     51.90    55.42    71.52     31.12    42.78    64.35
AE-HCN         53.58    55.04    73.41     56.97    58.90    74.67     40.61    48.59    69.31
AE-HCN-CNN     55.04    55.35    70.38     58.32    64.51    81.33     45.12    52.79    68.59

An additional challenge was to train VHCN while keeping its variational properties (i.e. a reasonably high KL term); the BoW reconstruction loss, which we used in order to simplify the secondary task, helped with this as well (Zhao et al., 2017). On the other hand, while achieving superior performance on clean data, VHCN's properties did not result in OOD handling improvements.

7.5 Experiment 2: AE-HCN(-CNN)

To study the effect of OOD input on these two dialogue systems' performance, we use three task-oriented dialogue datasets: bAbI6, as in the previous experiment, as well as GR and GM, taken from the Google multi-domain dialogue datasets (Shah et al., 2018). Basic statistics of the datasets are shown in Table 7.4. bAbI6 deals with restaurant finding, GM with buying a movie ticket, and GR with reserving a restaurant table, respectively. We generated distinct action templates by replacing entities with slot types and consolidating based on dialogue act annotations. We augment the test datasets (denoted as Test-OOD in Table 7.4) with the procedure described in Section 7.1.

7.5.1 Experimental Setup and Evaluation

We comparatively evaluate four different models:

1. an HCN model trained on in-domain training data;

2. an AE-HCN-Indep model, which is the same as the HCN model except that it deals with OOD utterances using an independent autoencoder-based rule mimicking Ryu et al. (2017): when the reconstruction score is greater than a threshold, the fallback action is chosen (we set the threshold to the maximum reconstruction score of the training data);

3.
an AE-HCN(-CNN) model trained on training data augmented with counterfeit OOD turns; the counterfeit OOD probability ρ is set to 15% and β to 30.

Table 7.6: Performance of AE-HCN-Indep on bAbI6 Test-OOD with different thresholds

Threshold   Precision@1   OOD F1
6           40.39         48.38
7           42.56         50.46
8           43.69         51.08
9           52.21         63.86
10          47.27         44.44

Table 7.7: AE-HCN-CNN performance on bAbI6 with varying counterfeit OOD rates

            bAbI6 Test    bAbI6 Test-OOD
OOD Rate    Precision@1   Precision@1   OOD F1
5%          55.25         55.48         69.72
10%         55.08         57.29         74.73
15%         55.04         55.35         70.38
20%         53.48         56.53         75.55
25%         53.72         56.66         73.13
30%         54.87         56.02         71.44

Our training setup overall follows that of Experiment 1 (see Table 7.2), although here we pretrain the autoencoder on in-domain training data and keep it fixed while training the main models. The results are shown in Table 7.5. Since there are multiple actions that are appropriate for a given dialogue context, we use per-utterance Precision@K as the performance metric. We also report the F1-score for OOD detection, to measure the balance between precision and recall. The performance of HCN on Test-OOD is about 15 points down on average from that on Test, showing the detrimental impact of OOD utterances on models trained only on in-domain data. AE-HCN(-CNN) outperforms HCN on Test-OOD by a large margin, about 17 (20) points on average, while keeping the performance trade-off relative to Test to a minimum. Interestingly, AE-HCN-CNN performs even better than HCN on Test, indicating that, with the CNN encoder, counterfeit OOD augmentation acts as an effective regularisation. In contrast, AE-HCN-Indep failed to robustly detect OOD utterances, resulting in much lower numbers for both metrics on Test-OOD, as well as hurting the performance on Test. This result indicates two crucial points: 1.
the inherent difficulty of finding an appropriate threshold value without actually seeing OOD data; 2. the limitation of models which do not consider context.

Figure 7.3: Histograms of AE reconstruction scores for the bAbI6 test data

For the first point, Figure 7.3 plots histograms of reconstruction scores for the IND and OOD utterances of bAbI6 Test-OOD (the histograms for the other datasets follow similar trends). If OOD utterances had been known a priori, the threshold should have been set to a much higher value than the maximum reconstruction score of the IND training data (6.16 in this case). For the second point, Table 7.6 shows the search for the best threshold value for AE-HCN-Indep on the bAbI6 task when given actual OOD utterances (which is highly unrealistic in a real-world scenario). Note that the best performance, achieved at a threshold of 9, is still not as good as that of AE-HCN(-CNN). This implies that we can perform better OOD detection by jointly considering other context features. Furthermore, we conduct an experiment on AE-HCN's sensitivity to the β hyperparameter, which is the upper bound of the counterfeit reconstruction scores for turn dropout training. We vary β from 30, which is quite close to α (the corresponding lower bound on the reconstruction scores, determined by the statistics of the training set), up to 240, which is more than 10α. The results are shown in Table 7.8. As we observe in the table, there is no dependence between the overall accuracy of the model and the β parameter; however, the OOD detection F1-score kept increasing as the range of reconstruction scores fed into the model grew broader, with the increase nearly stopping as β neared 10α. Therefore, it makes sense to perform a grid search over β while training the model. Finally, we conduct a sensitivity analysis by varying the counterfeit OOD probabilities.
Table 7.7 shows the performance of AE-HCN-CNN on bAbI6 Test-OOD with different ρ values, ranging from 5% to 30%. The results indicate that our method manages to produce good performance regardless of the ρ value. This superior stability contrasts nicely with the high sensitivity of AE-HCN-Indep to threshold values, as shown in Table 7.6.

Table 7.8: AE-HCN sensitivity to the β hyperparameter on the bAbI6 OOD-augmented test set

α     β     Precision@1   OOD F1
23    30    59.3          75.3
23    60    57.8          75.5
23    90    56.9          75.6
23    120   59.3          75.9
23    150   58.3          76.1
23    180   58.0          76.2
23    210   57.0          75.7
23    240   59.1          76.0

7.6 Conclusion

In this chapter, we explored the problem of the robustness of neural dialogue systems to OOD input. Specifically, we presented a dataset for studying this problem, along with a general procedure for augmenting arbitrary datasets of interest for this purpose. Secondly, we introduced turn dropout, a simple yet efficient technique for improving the OOD robustness of dialogue control models, and evaluated its effect on several Hybrid Code Network-family models. We also proposed a novel OOD detection method that requires no OOD data at all, by utilising counterfeit OOD turns in the context of a dialogue. In the presence of OOD utterances, our method outperforms the best-performing dialogue models to date equipped with an OOD detection mechanism by a large margin (more than 17 points in Precision@K on average), while minimising the performance trade-off on in-domain test data. The detailed analysis sheds light on the difficulty of optimising context-independent OOD detection, and justifies the necessity of context-aware OOD handling models. This chapter concludes our contributions to the data-efficiency of goal-oriented systems. In the next chapter, we are going to explore the area of social, chat-oriented dialogue, as well as the data-efficiency issues arising there.
As such, in Chapters 4 and 5, while working on bootstrapping dialogue systems with minimal training data, we also cared about being able to train our systems without the need for any annotations. In the next chapter, we pursue a similar goal in the chat-oriented dialogue area, where the conversation models themselves, or their key components, depend on annotations in the form of user feedback scores.

Chapter 8
Data-Efficiency in Social Dialogue

In this final chapter, we go beyond slot-filling goal-oriented conversations and look at open-domain social dialogue. Chat-oriented dialogue differs from goal-oriented dialogue in the sheer fact that there is no 'goal' to pursue. That makes the objectives of such interaction harder to define, and the conventional slot-value annotations widely used in goal-oriented dialogue become of little relevance in the chat-oriented setup. That is why large-scale conversational models (e.g. Vinyals and Le, 2015) were initially trained end-to-end from raw dialogue transcripts, essentially to mimic the responses seen in the contexts in which they occur. However, in order to improve those models' base performance, especially in the case of user-facing products, annotations were brought back (e.g. Yu et al., 2016), their simplest form being dialogue-level user ratings of the conversation, e.g. binary 'good/bad' or Likert-scale 1-5 scores. As was discussed in Chapter 2, both generation- and retrieval-based models have been widely used for end-to-end chat-oriented dialogue.
Under the prominent approach to such systems used in practical applications (such as the Amazon Alexa Prize), the bot ensemble (Serban et al., 2017a; Yu et al., 2016; Song et al., 2016), a collection, or ensemble, of different bots is used: each bot proposes a candidate response to the user's input, and a response ranker selects the best response for the final system output to be uttered to the user. In this chapter, we focus on the task of finding the best supervision signal for training a response ranker for ensemble systems. Our contribution is twofold: first, we present a neural ranker [1] for ensemble-based dialogue systems and evaluate its level of performance using an annotation type which was provided to the Alexa Prize 2017 participants by Amazon (Ram et al., 2017), namely per-dialogue user ratings. Secondly, and most importantly, we explore an alternative way of assessing social conversations simply via their length, thus removing the need for any user-provided ratings.

[1] Code and trained models available at http://tiny.cc/alana_ranker

8.1 Data Efficiency in Open-Domain Dialogue

Chatbots, or socialbots, are dialogue systems aimed at maintaining an open-domain conversation with the user, spanning a wide range of topics, with the main objective of being engaging, entertaining, and natural. Currently, social chat systems are in great demand in industry, both as a means to increase user retention with existing goal-oriented dialogue systems (e.g. Apple Siri, Google Assistant, or Amazon Alexa) and as standalone conversational systems for use in, e.g., entertainment, robotics, and healthcare.
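The ensemble-plus-ranker loop described above can be sketched in a few lines; `bots` and `ranker` here are assumed callables standing in for the actual ensemble members and scoring model, not the deployed components:

```python
def ensemble_respond(user_input, context, bots, ranker):
    """Bot-ensemble response selection: every bot proposes a candidate
    response, the ranker scores each candidate in context, and the
    highest-scoring candidate becomes the system output."""
    candidates = [bot(user_input, context) for bot in bots]
    return max(candidates, key=lambda response: ranker(context, response))
```

Any scoring function can be plugged in as `ranker`, which is exactly what makes the choice of its training signal (ratings vs. length) the central question of this chapter.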
With the objective described above, it is problematic to build an open-domain, social dialogue system in a traditional, rule-based way (Weizenbaum, 1966), because hand-crafting such systems is not practical: they are difficult to create and maintain, cannot be trained on new data for a new setting, and cannot be automatically optimised. This is why many such systems are based on learning dialogue behaviour from massive amounts of data, such as Reddit conversations (Al-Rfou et al., 2016; Liu et al., 2017), OpenSubtitles (Lison and Tiedemann, 2016), or the Ubuntu Dialogue Corpus (Lowe et al., 2015). There are two fundamental approaches to such systems. Generation-based models are predominantly based on the Seq2Seq architecture (see Section 2.2.2) and produce responses word by word, in a language-model style, given an encoded dialogue context representation. Ranking- (or retrieval-) based models work similarly to an Information Retrieval engine (see Section 2.2.1): given a (preprocessed) user utterance as input, they first collect a pool of candidate responses, and then use their own ranking function to select the best one. Common sources of candidate responses are search engines over conversational corpora (e.g. the abovementioned Reddit conversations and OpenSubtitles), rule-based systems, question answering systems, or other response generation models. Ranking-based dialogue models are especially suitable for the ensembles of bots (Serban et al., 2017a; Yu et al., 2016; Song et al., 2016) which we focus on here.

8.1.1 The Need for Data Efficiency

It is well known that deep learning models are highly data-dependent, but there are currently no openly available data sources which can provide enough high-quality open-domain social dialogues for building a production-level socialbot. Therefore, a common way to get the necessary data is to collect it on a crowdsourcing platform (Krause et al., 2017).
Based on the model type and the development stage, it may be necessary to collect either whole dialogues, or some form of human feedback on how good a particular dialogue or turn is. However, both kinds of data are time-consuming and expensive to collect. The data efficiency of a dialogue model can be split into two parts accordingly:

- sample efficiency: the number of data points needed for the model to train. As such, it is useful to specify an order of magnitude of the training set size for different types of machine learning models;
- annotation efficiency: the amount of annotation effort needed. For instance, traditional goal-oriented dialogue system architectures normally require intent, slot-value, and dialogue state annotation (e.g. Young et al., 2010), whereas end-to-end conversational models work simply with raw text transcriptions (e.g. Vinyals and Le, 2015).

Table 8.1: Correlation study of key dialogue aspects

Variables                  Pearson corr. coefficient
rating/length              0.11
rating/positive feedback   0.11
rating/negative feedback   0.04
length/positive feedback   0.67
length/negative feedback   0.49

8.1.2 Users' Ratings and Explicit Feedback

The 2017 Alexa Prize challenge made it possible to collect large numbers of dialogues between real users of Amazon Echo devices and various chatbots. The only annotation collected was per-dialogue ratings, elicited at the end of the conversations by asking the user "On a scale of 1 to 5, how much would you like to speak with this bot again" (Venkatesh et al., 2017). Less than 50% of conversations were actually rated; the rest were quit without the user giving a score. In addition, note that a single rating applies to an entire conversation (rather than individual turns), which may consist of very many utterances.
The conversations in the challenge were about 2.5 minutes long on average, and about 10% of conversations were over 10 minutes long (Ram et al., 2017); this makes the ratings very sparse. Finally, the ratings are noisy: some dialogues which are clearly bad can get good ratings from some users, and vice versa (see a motivating example in Figure 8.1 [2]).

Apart from users' ratings, we employed an additional metric of dialogue quality for our study: explicit user feedback. That is, we searched for dialogue turns containing positive or negative user sentiment. Additionally, we used a whitelist and a blacklist of hand-picked phrases to filter out sentiments not addressed at the system directly. In total, we collected 605 unique utterances, e.g. "that's pretty cool", "you're funny", "gee thanks", "interesting fact", "funny alexa you're funny" (positive) and "weird", "sounds dumb", "strange" (negative).

[2] Due to the restrictions on publicly presenting real conversational data from the challenge, we provide a sketch of a dialogue which we had with the system ourselves.

Figure 8.1: Alana architecture, with an example chat

Given the main objective of social dialogue, stated in the Alexa Prize rules as 'long and engaging' conversation, we tried to verify the assumption that user ratings reflect these properties of the dialogue. Apart from our observations above, we performed a correlation analysis of user ratings against aspects of dialogue directly reflecting the objective: dialogue length and explicit user feedback (see Table 8.1). Although we have a significant number of dialogues which are both long and highly rated, the correlation analysis was not able to show any relationship between dialogue length and rating.
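The coefficients reported in Table 8.1 are sample Pearson correlations, which can be computed with a short stdlib function (a minimal sketch; the per-dialogue feature extraction that produces the input lists is assumed):

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length
    lists of per-dialogue measurements (e.g. length vs. rating)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)
```

Perfectly linearly related lists give +1 or -1, while values near zero, as for rating/length in Table 8.1, indicate no detectable linear relationship.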
Ratings are not correlated with user feedback either (see Section 8.5 for the details of user feedback collection). On the other hand, we found a promising moderate correlation between conversation length and explicit positive feedback from the users (we counted the number of such turns in a dialogue). The respective length/negative-feedback relationship is slightly weaker. Therefore, we experiment with conversation length as an approximation of user satisfaction and engagement, and use it as an alternative measure of dialogue quality. This allows us to take advantage of all conversations, not just those rated by users, for training a ranker. While some conversations might be long but not engaging (e.g. if there are a lot of misunderstandings, corrections, and speech recognition errors), training a ranker using only length makes it extremely annotation-efficient.

8.2 A Neural Ranker for Open-Domain Conversation

The ranker described here is part of Alana, Heriot-Watt University's Alexa Prize 2017 finalist socialbot (Papaioannou et al., 2017); see the visualisation in Figure 8.1. Alana is an ensemble-based model incorporating information-retrieval-based bots with news content and information on a wide range of topics from Wikipedia, a question answering system, and rule-based bots for various purposes, from amusing users with fun facts to providing a consistent persona. The rule-based bots are also required to handle sensitive issues which can be raised by real users, such as medical, financial, and legal advice, as well as profanities.

Figure 8.2: Neural ranker architecture

8.2.1 Ranker Architecture

The architecture of our ranker [3] is shown in Figure 8.2. The inputs to the model are 1-hot word-by-word vectors of a candidate response and the current dialogue context (we use the 3 most recent system and user turns).
They are encoded into a latent representation using a single shared RNN encoder based on GRU cells (Cho et al., 2014b). The context embedding vectors are then summed and concatenated with the response embedding (Eq. 8.1):

Enc(C, r) = Σ_i RNN(C_i) ⊕ RNN(r)   (8.1)

where C is the dialogue context and r is a response candidate. The context and the response are represented using combined word-agent tokens (where the agent is either a specific bot from the ensemble or the user), and are concatenated with lists of named entities extracted using the Stanford NER (Finkel et al., 2005). All the word-agent tokens and named entities share the same unified vocabulary. Encoder outputs, along with additional dialogue features such as context and response sentiment, timestamp, and bot names in the context and the response, go into the Predictor, a feed-forward neural network (MLP) whose output is the resulting rating (Eq. 8.2):

Pred(C, r) = σ(L(Sem(C, r) ⊕ f(C, r)))   (8.2)

where L(x) = ReLU(Mx + b) is the layer used in the Predictor (the number of such layers is a model parameter), Sem(C, r) = L(Enc(C, r)) is the vector of semantic context-response features, and f(C, r) is a vector of the additional dialogue features listed above. We use the ReLU activation for the hidden layers because it is known to be highly efficient with deep architectures (Nair and Hinton, 2010). Finally, we use the sigmoid activation σ for generating the final prediction in the range [0, 1].

[3] We would like to point out that, technically, our model produces a relevance score for a response candidate given the dialogue context, trained with a regression objective and only using training data with a partial order relation imposed (i.e. 1.0 for relevant, 0.0 for non-relevant context-response pairs). Therefore, the model can also be considered a conversational response re-scorer.
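Under the assumption that the shared encoder outputs fixed-size vectors, Eqs. 8.1 and 8.2 can be sketched in plain Python; the weight matrices and all names are toy placeholders, not the deployed implementation:

```python
import math

def encode(context_vecs, response_vec):
    """Eq. 8.1: sum the encoded context-turn vectors element-wise and
    concatenate the result with the encoded response."""
    summed = [sum(column) for column in zip(*context_vecs)]
    return summed + response_vec

def predictor(sem_features, dialogue_features, layers, final_w, final_b):
    """Eq. 8.2 sketch: ReLU layers L(x) = ReLU(Mx + b) applied to the
    concatenated feature vector, followed by a sigmoid output unit.
    `layers` is a list of (M, b) pairs given as nested lists."""
    x = sem_features + dialogue_features
    for M, b in layers:
        x = [max(0.0, sum(m_i * x_i for m_i, x_i in zip(row, x)) + b_j)
             for row, b_j in zip(M, b)]
    logit = sum(w * v for w, v in zip(final_w, x)) + final_b
    return 1.0 / (1.0 + math.exp(-logit))  # sigmoid keeps the score in [0, 1]
```

With all final weights at zero the sigmoid yields 0.5, the midpoint of the target range, which is a convenient sanity check for the output scaling.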
However, here we conform to the notation of Liu (2011), under which our approach is categorised as a point-wise ranking model with judgements in the form of relevance degrees (0/1).

8.2.2 Training Method

We use either dialogue rating or length as the prediction target, scaled to [0, 1] (as discussed in Sections 8.4 and 8.5). The model is trained to minimise the Mean Squared Error (MSE) loss against the target, using the Adagrad optimiser (Duchi et al., 2011). In our training setup, the model learns to predict per-turn target values. However, since only per-dialogue values are available in the data, we use the following approximation: the target value of a context-response pair is the target value of the dialogue containing it. The intuition behind this is the assumption that the majority of turns in "good" dialogues (either length- or rating-wise) are "good" in their local contexts as well, so that, given a large number of dialogues, the most successful and unsuccessful turns will emerge from the corresponding dialogues.

8.3 Baselines

We compare our neural ranker to two other models also developed during the competition, a handcrafted and a linear ranker; all three were deployed live in the Alana Alexa Prize 2017 finalist system (Papaioannou et al., 2017), and were therefore of sufficient quality for a production system receiving thousands of calls per day. We also compare our model to a recently published dual-encoder response selection model by Lu et al. (2017), based on an approach conceptually close to ours.

8.3.1 Handcrafted Ranker

In the handcrafted approach, several turn-level and dialogue-level features are calculated, and a linear combination of those feature values with manually adjusted coefficients is used to predict the final ranking. The list of features includes:

- coherence, information flow, and dullness, as defined by Li et al.
(2016b);
- overlap between the context and the response with regard to named entities and noun phrases;
- topic divergence between the context turns and the response, where topics are represented using Latent Dirichlet Allocation (LDA; Hoffman et al., 2010);
- sentiment polarity, as computed by the NLTK Vader sentiment analyser (Gilbert and Hutto, 2014). [4]

[4] http://www.nltk.org/howto/sentiment.html

8.3.2 Linear Ranker

The linear ranker is based on the Vowpal Wabbit (VW) linear model (Agarwal et al., 2014). VW has a highly efficient implementation of stochastic gradient descent over various loss functions, capable of handling a very large (and sparse) feature space and a high number of training examples. We use the MSE loss function and the following features in our VW ranker model:

- bag-of-n-grams from the dialogue context (preceding 3 utterances) and the response;
- position-specific n-grams at the beginning of the context and the response (first 5 positions);
- dialogue flow features (Li et al., 2016b), the same as for the handcrafted ranker;
- bot name, from the set of bots in the ensemble.

VW implements feature combinations (Cartesian product) out of the box, which allows it to naturally include combinations of n-gram features from the context and the response, as well as others; the detailed VW ranker configuration is shown in Table D.2 of Appendix D.

8.3.3 Dual-Encoder Ranker

The closest architecture to our neural ranker is that of Lu et al. (2017), who use a dual-encoder LSTM with a predictor MLP for task-oriented dialogue in closed domains. Unlike this work, they do not use named entities, sentiment, or input features other than basic word embeddings. Dialogue context is not modelled explicitly either, and is limited to a single user turn.
We reproduced their architecture and set its parameters to the best ones reported in the original paper.

8.4 Training Data

Our data consists of transcripts of conversations between our socialbot and real users of the Amazon Echo, collected over the challenge period, February to December 2017. The dataset consists of over 200,000 dialogues (5,000,000+ turns), of which over 100,000 dialogues (totalling nearly 3,000,000 turns) are annotated with ratings. From this data, we sampled two datasets of matching size for training our rankers, using the per-turn target value approximation described in Section 8.2.2: the Length and Rating datasets for the respective versions of the rankers. The target values (length/rating) in both sets are normalised into the [0, 1] range. The Length set contains context-response pairs from long dialogues (target value above 0.7) as positive instances and context-response pairs from short dialogues (target value below 0.3) as negative ones. With the same selection criteria, the Rating set contains context-response pairs from highly rated dialogues (ratings 4 and 5) as positive instances and context-response pairs from low-rated dialogues (ratings 1 and 2) as negative ones. Each dataset contains 500,000 instances in total, with equal proportions of positive and negative instances. We use an 8:1:1 split for training, development, and test sets. Prior to creating both datasets, we filtered out of the dialogue transcripts all system turns which cannot be treated as natural social interaction (e.g. a quiz game), as well as the outliers (interaction length at or above the 95th percentile, or less than 3 turns long).[5] The thresholds of 0.3 and 0.7 were set heuristically based on preliminary data analysis. On the one hand, these values provide contrastive-enough ratings (e.g. we are not sure whether a rating in the middle of the scale should be interpreted as negative or positive).
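The dataset construction above, combining the per-dialogue thresholds with the per-turn target approximation of Section 8.2.2, can be sketched as follows (field names are illustrative):

```python
# Sketch of the Length/Rating dataset construction: each context-response pair
# inherits its dialogue's normalised target; pairs from high-target dialogues
# become positive instances, pairs from low-target dialogues become negative
# ones, and mid-range dialogues are discarded as ambiguous.

def build_ranker_dataset(dialogues, pos_threshold=0.7, neg_threshold=0.3):
    """dialogues: list of {"target": float in [0, 1],
                           "pairs": [(context, response), ...]}"""
    instances = []
    for dialogue in dialogues:
        target = dialogue["target"]
        if target > pos_threshold:
            label = 1.0
        elif target < neg_threshold:
            label = 0.0
        else:
            continue  # mid-range dialogues: neither clearly good nor bad
        # Per-turn approximation: every pair inherits the dialogue-level label.
        for context, response in dialogue["pairs"]:
            instances.append((context, response, label))
    return instances

dialogues = [
    {"target": 0.9, "pairs": [("ctx1", "resp1"), ("ctx2", "resp2")]},
    {"target": 0.5, "pairs": [("ctx3", "resp3")]},   # discarded
    {"target": 0.1, "pairs": [("ctx4", "resp4")]},
]
data = build_ranker_dataset(dialogues)
```

With the 0.7/0.3 thresholds, the middle dialogue contributes nothing, while the first and last yield positive and negative instances respectively.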
On the other hand, they allow us to get enough training data for both the Length and Rating datasets.[6]

8.5 Evaluation and Experimental Setup

In order to tune the neural rankers, we performed a grid search over the shared encoder GRU layer size and the Predictor topology.[7] The best configurations are determined by the loss on the development sets. For evaluation, we used an independent dataset.

8.5.1 Evaluation Based on Explicit User Feedback

At the evaluation stage, we check how well the rankers can distinguish between good responses and bad ones. The criterion for 'goodness' that we use here is chosen to be independent of both training signals. Specifically, we collected an evaluation set composed of dialogue turns followed by explicit positive feedback from the users, as described earlier in Section 8.1.2. Our 'bad' response candidates are randomly sampled across the dataset. Note that while we use ⟨positive-feedback-based response, random response⟩ pairs to evaluate the ranker, the opposite case, i.e. ⟨negative-feedback-based response, random response⟩, is not useful for evaluation. The reason is that, given 'gold' bad system responses (followed by negative user feedback), it is not feasible to find good alternatives to them in their exact contexts via random sampling: due to the extreme sparsity of the contexts, all the randomly sampled responses will most likely be bad.

[5] Some extremely long dialogues are due to users repeating themselves over and over, and so this filter removes these bad dialogues from the dataset. Dialogues less than 3 turns long are often cases where the user accidentally triggered the chatbot. These outliers amounted to about 14% of our data.
[6] Using more extreme thresholds did not produce enough data, while less extreme ones did not provide an adequate training signal.
[7] We tested GRU sizes of 64, 128, and 256, and Predictor layer numbers/sizes of [128], [128, 64], and [128, 32, 32].

Context:
  Usr: no
  Sys: i would love to talk about the news or maybe science, how about you?
  Usr: let's talk about the news
Good response:
  Sys: I read this somewhere: When will Hillary Clinton break her silence on Rick and Morty's Szechuan sauce? Shall I say some more about this?
Bad response:
  Sys: Hiya! What do you want to talk about?

Context:
  Usr: darth vader
  Sys: anakin skywalker the character in star wars episode iii revenge of the sith
  Usr: yeah that's him
Good response:
  Sys: Oh, regarding darth vader, I read this: Star Wars: Darth Vader 7 Reveals Another Jedi Who Survived the Clone Wars. Shall I say some more about this?
Bad response:
  Sys: Ok, I think you were talking about airplane, I heard that, An airplane or aeroplane is a powered, fixed-wing aircraft that is propelled forward by thrust from a jet engine or propeller. Do you want to know more about airplane?

Context:
  Usr: reading books same here
  Sys: oh a book worm how nice i like reading books on the cloud let's see what else
  Usr: harry potter harry potter
Good response:
  Sys: I believe that Harry Potter is a series of fantasy novels written by British author J. K. Rowling.
Bad response:
  Sys: So Are you more interested in sports or video games?

Table 8.2: Examples from the User feedback dataset for pairwise ranking

'Goodness' defined in this way allows us to evaluate how well our two approximated training signals can optimise for the user's satisfaction as explicitly expressed at the turn level, thus leading to our desired behaviour, i.e., producing long and engaging dialogues. The User feedback dataset contains 24,982 ⟨context, good response, bad response⟩ tuples in total. To evaluate the rankers on this dataset, we use precision@k, which is commonly used for information retrieval system evaluation (Eq. 8.3).
P@k(c, R) = (1/k) Σ_{i=1}^{k} Rel(c, R_i)        (8.3)

where c is the dialogue context, R is the list of response candidates, and Rel is a binary predicate indicating whether a particular response is relevant to the context. Precision is typically used together with recall and the F-measure. However, our dialogue data is extremely sparse, so that it is hard to find multiple good responses for the same exact dialogue context; recall and the F-measure therefore cannot be applied in this setting. Since we only perform pairwise ranking, we use precision@1 to check that the good answer is the top-ranked one. Also due to data sparsity, we only perform this evaluation with gold positive responses and sampled negative ones: it is typically not possible to find a good response with exactly the same context as a given bad response.

Model                  P@1 (eval set)   Loss (test set)
Handcrafted            0.478            -
VowpalWabbit@length    0.742            0.199
VowpalWabbit@rating    0.773            0.202
DualEncoder@length     0.365            0.239
DualEncoder@rating     0.584            0.247
Neural@length          0.824            0.139
Neural@rating          0.847            0.138

Table 8.3: Ranking models evaluation

8.5.2 Interim Results

The results of our first experiment, namely pairwise ranking precision on the independent User feedback dataset and loss on the Length/Rating test sets (Section 8.4) for the corresponding training-set sizes of 500,000, are shown in Table 8.3. We can see that the neural ranker trained with user ratings clearly outperforms all the alternative approaches in terms of test set loss on its respective dataset as well as pairwise ranking precision on the evaluation dataset. Also note that both versions of the neural ranker stand extremely close to each other on both evaluation criteria, given the much greater gap between them and their next-best-performing alternatives, the linear rankers.
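The pairwise precision@1 evaluation (Eq. 8.3 with k = 1) can be sketched as follows, where `score` stands in for any of the rankers and the toy scorer is purely illustrative:

```python
# Sketch of pairwise precision@1: for each <context, good, bad> tuple, the
# ranker scores both candidates and is credited when the good response wins.

def precision_at_1(eval_tuples, score):
    """eval_tuples: iterable of (context, good_response, bad_response).
    score: callable (context, response) -> float."""
    hits = 0
    total = 0
    for context, good, bad in eval_tuples:
        total += 1
        if score(context, good) > score(context, bad):
            hits += 1
    return hits / total if total else 0.0

# Toy scorer that simply prefers longer responses (illustrative only).
def toy_score(context, response):
    return len(response)

tuples = [
    ("let's talk about the news",
     "I read this somewhere about breaking news today. Shall I say more?",
     "Hiya!"),
    ("harry potter harry potter",
     "Harry Potter is a series of fantasy novels by J. K. Rowling.",
     "So, sports or video games?"),
]
p_at_1 = precision_at_1(tuples, toy_score)
```

On this toy data the length-preferring scorer ranks both good responses first, giving precision@1 of 1.0; a real evaluation plugs in the trained ranker's scoring function over the 24,982 User feedback tuples.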
The dual-encoder ranker turned out not to be an efficient model for our problem, partly because it was originally optimised for a different task, as reported by Lu et al. (2017).

8.6 Training on Larger Amounts of Data

A major advantage of training on raw dialogue transcripts is data volume: in our case, we have roughly twice as many raw dialogues as rated ones (cf. Section 8.4). This situation is very common in data-driven development: since data annotation is a very expensive and slow procedure, there is almost always significantly more raw data than high-quality annotated data. To illustrate this, we collected extended training datasets of raw dialogues of up to 1,000,000 data points for training from the length signal. We trained our neural ranker and the VW ranker using the same configuration as in Section 8.5. The results are shown in Figure 8.3, where we see that the neural ranker trained on the length signal consistently outperforms the ratings-based one. Its trend, although fluctuating, is more stable than that of VW; we believe that this is due to VW's inherently lower model capacity, as well as its training setup, which is mainly optimised for speed. The figure also shows that VW@length is worse than VW@rating, regardless of training data size.

[Figure 8.3: Comparison of rankers trained on extended datasets. Precision@1 plotted against training-set size (500,000 to 1,000,000) for Neural@rating (baseline), VW@rating (baseline), Neural@length, and VW@length.]

8.7 Discussion

Our evaluation results show that the neural ranker presented above is an efficient approach to response ranking for social conversation.
On a medium-sized training set, the two versions of the neural ranker, length- and ratings-based, showed strongly superior performance to three alternative ranking approaches, and performed competitively with each other. Furthermore, the experiment with extended training sets shows that the accuracy of the length-based neural ranker grows steadily given more unannotated training data, outperforming the rating-based ranker with only slightly larger training sets. The overall results of our experiments confirm that dialogue length, even approximated in quite a straightforward way, provides a sufficient supervision signal for training a ranker for a social conversation model.

8.8 Related Work

Work on response ranking for conversational systems has been growing rapidly in recent years. Some authors employ ranking based on heuristically defined measures: Yu et al. (2015, 2016) use a heuristic based on keyword matching, part-of-speech filters, and Word2Vec similarity. Krause et al. (2017) apply standard information retrieval metrics (TF-IDF) with importance weighting for named entities. However, most of the recent research attempts to train the ranking function from large amounts of conversational data, as we do. Some authors use task-based conversations, such as IT forums (Lowe et al., 2015) or customer services (Lu et al., 2017; Kumar et al., 2018), while others focus on online conversations on social media (e.g. Wu et al., 2016; Al-Rfou et al., 2016). The basic approach to learning the ranking function in most recent work is the same (e.g. Lowe et al., 2015; Al-Rfou et al., 2016; Wu et al., 2016): the predictor is taught to rank positive responses taken from real dialogue data higher than randomly sampled negative examples.
Some of these approaches do not even include rich dialogue contexts and use only immediate context-response pairs for ranking (Ji et al., 2014; Yan et al., 2016; Lu et al., 2017). Some authors improve upon this basic scenario: Zhuang et al. (2018) take a desired emotion of the response into account; Liu et al. (2017) focus on the engagement of responses based on Reddit comment ratings; Fedorenko et al. (2018) train the ranking model over several iterations, using highly ranked incorrect responses as negative examples for the next iteration. Nevertheless, to our knowledge, none of the prior works attempt to optimise for long-term dialogue quality; unlike in our work, their only ranking criterion is focused on the immediate response.

8.9 Conclusion

We have presented a neural response ranker for open-domain 'social' dialogue systems and described two methods for training it using the common supervision signals coming from conversational data: user-provided ratings and dialogue length. We demonstrated its efficiency by evaluating it using explicit positive feedback as a measure of user engagement. Specifically, trained on ratings, our neural ranker consistently outperforms several strong baselines; moreover, given larger amounts of data and only using conversation length as the objective, the ranker performs better than the ratings-based one, reaching 0.87 Precision@1. This shows that conversation length can be used as an optimisation objective for generating engaging social dialogues, which means that we no longer need the expensive and time-consuming procedure of collecting per-dialogue user ratings, as was done for example in the Alexa Prize 2017 and is common practice in conversational AI research. Per-turn user ratings may still be valuable to collect for such systems, but these are even more expensive and problematic to obtain. Looking ahead, this advance will make data collection for social conversational agents simpler and less expensive in the future.
Chapter 9. Conclusions and Future Work

In this thesis, we have presented a series of techniques for enabling the development of data-efficient and robust dialogue systems in a data-driven way. The core contributions of this thesis are the models for bootstrapping dialogue systems from minimal data. Our dialogue knowledge transfer model DiKTNet addresses the problem of training dialogue response generation systems in a few-shot setup, and the hybrid GRTr model is designed for adaptation to a new domain where there exists support data to retrieve responses from. Our subsequent contributions presented in this thesis address the potentially insufficient robustness of models trained from minimal data, tackling the spoken disfluency detection and OOD input detection problems. The multitask LSTM-based disfluency detector supports incremental word-by-word processing and demonstrates generalisation potential beyond its main dataset; therefore, it can be used with a wide range of dialogue models, potentially improving their coverage of naturally varied input data with no extra training effort. In turn, the data-augmentation-based training procedure for OOD handling is potentially applicable to setups with the strictest training data limitations, as it does not require real OOD examples to adjust to and estimates the odds of encountering an unusual utterance in usual contexts. Finally, our study on data efficiency in social dialogue sheds light on a way to continuously improve the performance of open-domain response rankers using only dialogue length as the main supervision signal, thus avoiding the dependence on user ratings, which are cumbersome and expensive to collect and oftentimes too noisy for direct use as a supervision signal.
9.1 Directions for Future Work

The findings from our studies in this thesis open up possibilities for further research; here, we briefly outline the most promising directions for future work. Firstly, DiKTNet, presented in Chapter 4, while outperforming the previous best model on the Stanford Multi-Domain dataset in both accuracy and the amount of in-domain data required, still has several areas for improvement. We see a potential benefit in bringing more structure to the latent dialogue representation, which would intuitively correspond better to the actual structure of dialogue, e.g. as shown in Shi et al. (2019b). Another potential point of improvement is the system's goal-oriented performance: we assume that in a few-shot setup, it highly depends on the way in which the system handles the KB entries. A study of the particular copy mechanism employed in our setup and its optimality for the task might shed light on ways to attain higher Entity F1 scores. Finally, an evaluation beyond word-overlap-based metrics is necessary for an adequate assessment of the system's performance in real-world settings. Similarly, the absolute results of GRTr, the winning entry in the Dialog System Technology Challenge 8 Fast Domain Adaptation task, which we presented in Chapter 5, still suggest that domain adaptation for response generation needs further research towards achieving production-level performance. One promising direction that we are going to explore in our future work is the meta-learning framework, e.g. as used by Qian and Yu (2019).
Based on splitting the task into multiple subtasks, with an independent copy of the base model for each one, and subsequently incorporating the individual training progress back into the base model, the meta-learning approach will naturally fit the multi-domain, multi-task nature of the MetaLWOz dataset, as well as lead to potentially better fine-tuning performance. For the multitask LSTM disfluency detector presented in Chapter 6, we see the main issue as being the modest out-of-dataset generalisation performance. Although this was a common issue for all the models we evaluated, attaining a more practical level of generalisation is key to making this model a truly reusable component for a wide range of dialogue system pipelines; therefore, addressing this issue is our next step in this direction. As such, we will explore possibilities of knowledge transfer to new closed domains in a 1-shot setting, both with regular supervised training and with unsupervised LM fine-tuning. Our study on robustness to out-of-domain input, presented in Chapter 7 of this thesis, also leaves space for further exploration. We presented and evaluated a series of models for the detection of OOD utterances, all based on autoencoders of different types. It therefore makes sense to explore other ways of scoring OOD utterances than autoencoders: for example, Generative Adversarial Networks (GANs) have great potential, especially for their inherent capability to produce realistic data samples. We are also interested in using traditional generative models to produce more realistic counterfeit user utterances. Finally, it is worth noting that our method is conceptually designed for improving robustness in extreme data-efficient settings (i.e. 1-shot/few-shot training) by not requiring any data other than the available IND dialogue examples.
However, in the interest of comparability with the original HCN model, we performed our experiments with full data; therefore, a few-shot evaluation of our models is the immediate next step in this research direction for us. The final study of this thesis, presented in Chapter 8, addressed the problems of social, or chat-oriented, dialogue. Our neural model for conversational response ranking, together with the technique of training it from raw data, confirmed that dialogue length, even approximated in quite a straightforward way, provides a sufficient supervision signal for training a ranker for a social conversation model. In our future work, we will attempt to further improve the model using the same data in an adversarial setup following Wang et al. (2017a). We also plan to directly train our model for pairwise ranking in the fashion of Burges et al. (2005) instead of the current point-wise approach. Finally, we are going to employ contextual sampling of negative responses using approximate nearest neighbour search (Johnson et al., 2017) in order to perform more efficient pairwise training. The above techniques have the potential to make an impact in industrial data-driven dialogue system development. For that, they can be incorporated into a user-friendly 'data efficiency toolkit' aimed at use by non-experts. As the main way to control the behaviour of purely data-driven systems is to pass the corresponding training examples into the pipeline, there is a need for means to analyse and monitor the performance of the dialogue system's key components in a way that is transparent to the end user. This can be done in terms of the specific decisions made by the system and the training examples that contributed to those decisions. Although explainable machine learning was not within the scope of this work, it is important to point out that such means are vital for a data-driven product targeted at non-expert users.
Crucially, the models and techniques presented here were evaluated in setups mainly motivated by reproducibility and fair comparison to previous models for the respective tasks. These setups do not necessarily correspond to the designated settings of limited training data. For example, the disfluency detection model as well as the OOD detection models were trained and evaluated on the full datasets, and therefore a few-shot evaluation, with possible subsequent fine-tuning of the models, is needed in order to make them practically justified. We see a strong potential for a significant impact, both in academia and industry, of the data-efficient techniques presented in this thesis, and we will continue pursuing efforts to ensure their wide applicability in real-world scenarios following the steps outlined above. We hope that the open-source resources released as part of this work will foster further research in this direction across the dialogue research community.
Appendix A. Dialogue Knowledge Transfer Networks: Supplementary Material

Model              | Nav. BLEU,% | Nav. Entity F1,% | Weather BLEU,% | Weather Entity F1,% | Sched. BLEU,% | Sched. Entity F1,%
ZSDG               | 5.9         | 14.0             | 8.1            | 31                  | 7.9           | 36.9
NLU_ZSDG           | 6.1 ± 2.2   | 12.7 ± 3.3       | 5.0 ± 1.6      | 16.8 ± 6.7          | 6.0 ± 1.7     | 26.5 ± 5.4
NLU_ZSDG+Stage1    | 7.9 ± 1     | 12.3 ± 2.9       | 8.7 ± 0.6      | 21.5 ± 6.2          | 8.3 ± 1       | 20.7 ± 4.8
HRED@1%            | 6.0 ± 1.8   | 9.8 ± 4.8        | 6.9 ± 1.1      | 22.2 ± 10.7         | 5.5 ± 0.8     | 25.6 ± 8.2
HRED@3%            | 7.9 ± 0.7   | 11.8 ± 4.4       | 9.6 ± 1.8      | 39.8 ± 7            | 8.2 ± 1.1     | 34.8 ± 4.4
HRED@5%            | 8.3 ± 1.3   | 15.3 ± 6.3       | 11.5 ± 1.6     | 38.0 ± 10.5         | 9.7 ± 1.4     | 37.6 ± 8.0
HRED@10%           | 9.8 ± 0.8   | 19.2 ± 3.2       | 12.9 ± 2.4     | 40.4 ± 11.0         | 12.0 ± 1.0    | 38.2 ± 4.2
HRED+VAE@1%        | 3.6 ± 2.6   | 9.3 ± 4.1        | 6.8 ± 1.3      | 23.2 ± 10.1         | 4.6 ± 1.6     | 28.9 ± 7.3
HRED+VAE@3%        | 6.9 ± 1.9   | 15.6 ± 5.8       | 9.5 ± 2.6      | 32.2 ± 11.8         | 6.6 ± 1.7     | 34.8 ± 7.7
HRED+VAE@5%        | 7.8 ± 1.9   | 12.7 ± 4.2       | 10.1 ± 2.1     | 40.3 ± 10.4         | 8.2 ± 1.7     | 34.2 ± 8.7
HRED+VAE@10%       | 9.0 ± 2.0   | 18.0 ± 5.8       | 12.9 ± 2.2     | 40.1 ± 7.6          | 11.6 ± 1.5    | 39.9 ± 6.9
HRED+Stage1@1%     | 7.1 ± 0.8   | 10.1 ± 4.5       | 10.6 ± 2.1     | 31.4 ± 8.1          | 7.4 ± 1.2     | 29.1 ± 6.6
HRED+Stage1@3%     | 9.2 ± 0.8   | 14.5 ± 4.8       | 13.1 ± 1.7     | 40.8 ± 6.1          | 9.2 ± 1.2     | 32.7 ± 6.1
HRED+Stage1@5%     | 10.3 ± 1.2  | 15.6 ± 4.5       | 14.5 ± 2.2     | 40.9 ± 8.6          | 11.8 ± 1.9    | 37.6 ± 6.1
HRED+Stage1@10%    | 12.3 ± 0.9  | 17.3 ± 4.5       | 17.6 ± 1.9     | 47.5 ± 6.0          | 15.2 ± 1.6    | 38.7 ± 8.4
HRED+ELMo@1%       | 5.8 ± 1.9   | 18.2 ± 3.8*      | 7.3 ± 2.6      | 38.5 ± 11.1         | 6.3 ± 2.6     | 36.3 ± 9.2
HRED+ELMo@3%       | 8.0 ± 1.3   | 17.2 ± 4.2       | 10.6 ± 1.1     | 42.0 ± 11.0         | 9.5 ± 2.0     | 39.6 ± 9.2
HRED+ELMo@5%       | 9.4 ± 0.8   | 21.5 ± 7.3       | 12.1 ± 2.0     | 39.0 ± 12.8         | 11.3 ± 2.1    | 40.0 ± 5.6
HRED+ELMo@10%      | 9.9 ± 1.1   | 24.3 ± 5.7       | 14.9 ± 2.7     | 41.4 ± 12.0         | 14.5 ± 1.4    | 43.4 ± 3.9
DiKTNet@1%         | 8.4 ± 0.7*  | 15.2 ± 4.0       | 11.5 ± 1.7*    | 43.0 ± 10.5*        | 8.1 ± 0.8*    | 40.5 ± 6.3*
DiKTNet@3%         | 10.4 ± 1.2  | 19.2 ± 4.8       | 15.7 ± 2.1     | 44.0 ± 11.7         | 11.1 ± 1.3    | 38.2 ± 5.8
DiKTNet@5%         | 11.5 ± 1.1  | 23.9 ± 2.9       | 15.5 ± 2.1     | 39.5 ± 14.8         | 13.7 ± 2.0    | 41.1 ± 3.8
DiKTNet@10%        | 12.9 ± 1.0  | 26.8 ± 4.2       | 20.4 ± 1.2     | 48.0 ± 5.6          | 17.5 ± 1.3    | 42.8 ± 2.6

Table A.1: Evaluation results. Marked with asterisks are individual results higher than ZSDG's performance which are achieved with the minimum amount of training data. In bold is the model consistently outperforming ZSDG in all domains and metrics with minimum data.

Domain (#Dialogues) | Domain (#Dialogues)
UPDATE CALENDAR (1991) | GUINESS CHECK (1886)
ALARM SET (1681) | SCAM LOOKUP (1658)
PLAY TIMES (1601) | GAME RULES (1590)
CONTACT MANAGER (1483) | LIBRARY REQUEST (1339)
INSURANCE (1299) | HOME BOT (1210)
HOW TO BASIC (1086) | CITY INFO (965)
TIME ZONE (951) | TOURISM (935)
SHOPPING (903) | BUS SCHEDULE BOT (898)
CHECK STATUS (784) | WHAT IS IT (776)
STORE DETAILS (737) | APPOINTMENT REMINDER (668)
PRESENT IDEAS (664) | GEOGRAPHY (653)
SKI BOT (607) | MOVIE LISTINGS (607)
UPDATE CONTACT (581) | ORDER PIZZA (577)
EDIT PLAYLIST (574) | SPORTS INFO (561)
BOOKING FLIGHT (555) | WEATHER CHECK (551)
EVENT RESERVE (539) | RESTAURANT PICKER (535)
LOOK UP INFO (533) | AUTO SORT (514)
QUOTE OF THE DAY BOT (513) | WEDDING PLANNER (510)
MAKE REST. RESERVATIONS (510) | AGREEMENT BOT (507)
NAME SUGGESTER (499) | APARTMENT FINDER (499)
HOTEL RESERVE (497) | PHONE PLAN BOT (496)
DECIDER BOT (487) | VACATION IDEAS (486)
PHONE SETTINGS (473) | POLICY BOT (447)
PROMPT GENERATOR (446) | MUSIC SUGGESTER (445)
PET ADVICE (426) | BANK BOT (367)
CATALOGUE BOT (288) |

Table A.2: MetaLWOz domains

MetaLWOz: dialogue examples

Domain: Ski
User Task: You want to know if there are good ski hills an hour's drive from your current location.
Bot Task: Tell the user that there are no ski hills in their immediate location.
Bot: Hello how may I help you?
User: Is there any good ski hills an hour's drive from my current location?
Bot: I'm sorry to inform you that there are no ski hills in your immediate location
User: Can you help me find the nearest?
Bot: Absolutely! It looks like you're about 3 hours away from Bear Mountain. That seems to be the closest.
User: Hmm.. sounds good
Bot: Alright! I can help you get your lift tickets now! When will you be going?
User: Awesome! please get me a ticket for 10pax
Bot: You've got it. Anything else I can help you with?
User: None. Thanks again!
Bot: No problem!

Domain: Game rules
User Task: Ask how to win at the text adventure Zork. If the Bot cannot help you, ask them what games it can help you with.
Bot Task: Tell the user that you are programmed to help them play games, not win them. If the user starts talking about other games, go along with them and try to help.
Bot: Hello how may I help you?
User: I want to know how I can win playing Zork?
Bot: I am programmed to help you play games, not win them
User: What games can you help me with?
Bot: Any game
User: Can you specify a certain game?
Bot: Zork
User: What are the rules of Zork?
Bot: You have to follow through a cave adventure
User: What is the ending of the game?
Bot: The user comes out of the cave

Domain: Music suggestion
User Task: Tell the bot that you like a certain genre of music, but don't prompt it to give you suggestions
Bot Task: Provide suggestions, even if the user doesn't ask for them (or tells you they don't need any)
Bot: Hello how may I help you?
User: Hello. I like classic rock music.
Bot: Dream on Aerosmith
User: No, I'm not a huge fan of that song. Is there anything else available?
Bot: Sweet Home Alabama
User: Well, I do like Lynyrd Skynyrd, but that song is so cliche. Do you have anything else by the same band?
Bot: Free bird
User: Yes, that is the perfect song for how I am feeling right now!
Bot: Great, Is there anything else?
User: No, but I really appreciate your assistance. Thank you so much!
Bot: Youre welcome

Domain: Information retrieval
User Task: Describe a movie where these guys have to climb a mountain and then throw a thing into the mountain and ask the bot if they know what the movie is called.
Bot Task: Ask the user to elaborate as much as possible about the movie they are asking about, and then suggest they're thinking of Lord of the Rings.
Bot: Hello how may I help you?
User: I'm trying to remember this movie. Can you help?
Bot: I would be happy to help. Can you tell me what it is about?
User: In it these guys have to climb up a mountain. Then they throw something into the mountain.
Bot: Do you know what genre the movie is, such as comedy, horror, sci-fi, or fantasy?
User: Not sure about that.
Bot: Do you know any of the actor or character names in the movie?
User: No... All I can remember is the stuff about the mountain.
Bot: Is it a new movie, or an older movie?
User: I don't know.
Bot: It sounds like the movie you are talking about could be Lord of the Rings. Does that sound right?
User: It could be that. Thanks for the help.
\fAppendix B Generative-Retrieval Transformers \u2014 Supplementary Material BLEU-1 BLEU-2 BLEU-3 CIDEr METEOR ROUGE-L Retrieval, BERT 7.93 4.43 2.87 12.56 7.38 6.91 Retrieval, SP+FT 9.57 5.37 3.45 14.32 6.98 7.19 HRED 8.66 3.86 2.11 13.73 6.02 7.75 GPT-2 base1 8.2 3.95 2.22 16.41 6.1 8.34 GPT-2 +sup2 11.33 6.45 4.17 23.38 8.23 10.74 GRTr 12.73 7.43 4.88 28.74 9.18 11.77 1 does not use support set.2 \ufb01ne-tuned to support set, but does not use retrieval logic Table B.1: Automatic evaluation results on MetaLWOz pure task BLEU-1 BLEU-2 BLEU-3 CIDEr METEOR ROUGE-L Retrieval, BERT 5.35 2.16 1.05 4.98 4.56 4.52 Retrieval, SP+FT 5.94 2.25 0.93 4.69 4.29 4.53 HRED 8.94 3.87 2.02 12.65 6.05 7.55 GPT-2 base 8.37 3.8 2.05 15.6 6.17 8.55 GPT-2 +sup 10.21 5.26 2.95 18.06 7.06 9.59 GRTr 10.39 5.31 2.95 18.26 7.1 9.27 Table B.2: Automatic evaluation results on MetaLWOz cross-task 139 \fAppendix B. GRTr 140 Context Wiz Hello how may I help you? Usr I\u2019m trying to book rooms Wiz For where? Usr I need a few hotel rooms in Tucson Wiz how many total rooms? Gold response Usr I need ten rooms Generated candidate Usr Five rooms [2.794] Retrieved candidate Usr I need 4 rooms on the same \ufb02oor [2.793] Context Wiz Hello how may I help you? Usr Want some info about Cyprus. Wiz What would you like to know about Cyprus? Gold response Usr What\u2019s best time to visit there? Generated candidate Usr What is the best time to visit? [1.088] Retrieved candidate Usr What is the best time to visit Cyprus? [1.087] Context Wiz Hello how may I help you? Usr I need some help. Wiz yes, how can i help you? Usr I\u2019m going to Montreal, and good festivals? Wiz yes, there will have music festivals and dancing party. and wine testing party. Usr In the summer? Wiz yes, it will in the august. Usr September? Wiz there only have wine testing party for september. Gold response Usr Bummer Generated candidate Usr ok thanks [-1.282] Retrieved candidate Usr That sounds interesting. 
Can you book me for that day? [\u22121.274] Context Wiz Hello how may I help you? Usr I need some ideas Wiz Sure About what? Usr Where\u2019s a good place to bring kids for vacation Wiz I highly suggest Disney Land Usr That\u2019s a good idea. When is a good time for us to go? When it\u2019s not so hot? Wiz But the place is vey expensive Gold response Usr Yeah, I\u2019m okay with that Generated candidate Usr I don\u2019t think that\u2019s a good idea. What are the best places to bring kids? [0.375] Retrieved candidate Usr Oh I know we\u2019d never go if we didn\u2019t get a Florida discount on tickets [0.384] Context Wiz Hello how may I help you? Usr I need to book a \ufb02ight and i need it by tonight Wiz Ok, book a \ufb02ight, it will be expensive at this late date Usr Book me a \ufb02ight to hawaii for 2 people leaving tonight Wiz Ok, let me see what I have, I have two tickets for 100,000 dollars Gold response Usr Book it. Money is no limit Generated candidate Usr Money is not a problem for us [0.805] Retrieved candidate Usr Sounds good I\u2019ll take them [0.819] Table B.3: GRTr predictions with the closest generated/retrieved candidates \u2014 MetaLWOz pure task (in bold is the model\u2019s \ufb01nal response) \fAppendix B. GRTr 141 Context Wiz Hello how may I help you? Usr Hello. I need to book a \ufb02ight for two Wiz ok where are you going? Usr I will be heading to Hawaii and I need to leave tonight Wiz will you be bringing extra luggage? Usr No, I will need \ufb01rst class seats. Money is no object Wiz i have 2 tickets for $50,000 Usr Okay, that will be \ufb01ne. Please purchase with my credit card on \ufb01le Wiz just to verify, what is the last 2 digits of your credit card? Gold response Usr The last two digits of my credit card are 00 Generated candidate Usr The last 2 digits are: 5555 [5.081] Retrieved candidate Usr Sure. Just sent it to my email. Thanks! [\u22129.761] Context Wiz Hello how may I help you? Usr I need an idea. Wiz Sure, what\u2019s up? 
  Usr  Can use advise me on a vacation spot?
  Wiz  Yes. Disney World and Disney Land are always fun for children and adults, but it's really expensive
  Usr  Anything closer?
  Wiz  Have you ever been to Niagara Falls?
  Usr  Yes,but that's too far away.
  Wiz  Where is your location?
Gold response
  Usr  Nebraska
Generated candidate
  Usr  I'm in New York. [3.711]
Retrieved candidate
  Usr  ok, thanks [-9.605]

Context
  Wiz  Hello how may I help you?
  Usr  I need to book a room for me
  Wiz  OK. Where do you want to stay?
  Usr  In downtown Toronto
  Wiz  And for what dates would you like the room?
  Usr  I need a 2 bedroom on the second floor I'd like to stay for this weekend
  Wiz  And how many people will be in your party?
Gold response
  Usr  2
Generated candidate
  Usr  Just myself and a few friends please [3.82]
Retrieved candidate
  Usr  Price isn't a concern. I would like a room on the second floor with 2 beds if that helps. [-9.183]

Context
  Wiz  Hello how may I help you?
  Usr  I need some ideas.
  Wiz  Okay, what about?
  Usr  I'm going to be taking my vacation soon
  Wiz  Where will you like to go?
Gold response
  Usr  I need someone good for the kids. somewhere*
Generated candidate
  Usr  Well, I'm going to be in Miami [2.13]
Retrieved candidate
  Usr  yes please. [-10.672]

Context
  Wiz  Hello how may I help you?
  Usr  I have 5 days of vacation next month I would like to go somewhere
  Wiz  Do you have any favorite places you have in mind?
  Usr  Anywhere in colorado, wyoming, montana
  Wiz  Let me look that up for you! If you're planning for Denver, Colorado. I see a few popular tourist attractions: Denver Art Museum and Elitch Gardens Theme Park Would you like to vacation there, or would you like me to continue based on your list of favorite places?
Gold response
  Usr  That's a good idea. When is a good time for us to go? When it's not so hot?
Generated candidate
  Usr  Sure, I'll take it! Thanks for the suggestions [-0.11]
Retrieved candidate
  Usr  What are some cheap Florida places to go? [-11.762]

Table B.4: GRTr predictions with the most distant generated/retrieved candidates - MetaLWOz pure task (in bold is the model's final response)

Context
  Wiz  Hello how may I help you?
  Usr  help me book a flight
  Wiz  do you have a particular place in mind?
  Usr  i hear Greece is nice this time of year. how do i get a flight there?
  Wiz  yes, it is; i can book your flights now, if you like
  Usr  wait don't! i am only curious on how to fly there
  Wiz  AA Airlines fly to Greece everyday
  Usr  how much is the ticket?
  Wiz  we have specials this month for $350 roundtrip
Gold response
  Usr  i don't have enough funds. i'll save up
Generated candidate
  Usr  how much is the flight to Germany? [1.569]
Retrieved candidate
  Usr  thanks [1.57]

Context
  Wiz  Hello how may I help you?
  Usr  Hello bot!
  Wiz  Hi, how can I help you?
  Usr  Are there any good festivals in Montreal this summer?
  Wiz  Yes there are a few, would you like me to list them?
  Usr  List the best one.
  Wiz  In my opinion the best one is the beer festival!
Gold response
  Usr  Awesome, I love to drink. How much are tickets.
Generated candidate
  Usr  Okay. I'll go with that. [1.754]
Retrieved candidate
  Usr  I like that idea. [1.752]

Context
  Wiz  Hello how may I help you?
  Usr  Hi! I need help with my travelling.
  Wiz  How can I help you? Okay good. I can offer you a flight to Franc, England, Germany, and Italy
  Usr  Sure. Can you get me a flight to Germany by next tuesday?
  Wiz  Alright done. Flight to Gemany.
Gold response
  Usr  How much does it cost?
Generated candidate
  Usr  Okay. I need to book a flight to Germany on Monday. [-1.916]
Retrieved candidate
  Usr  Perfect, what's the price tag? [-1.921]

Context
  Wiz  Hello how may I help you?
  Usr  I am heading to Montreal this Summer, any good festivals around that time?
  Wiz  Montreal Pride
  Usr  What will I do there?
  Wiz  It's a music fest
Gold response
  Usr  Great! I like that bot
Generated candidate
  Usr  Is there anything else? [-0.309]
Retrieved candidate
  Usr  Other suggestions? [-0.3]

Context
  Wiz  Hello how may I help you?
  Usr  i need help with vacations
  Wiz  How can I help you with that?
  Usr  whats a good place to bring kids
  Wiz  Disney World or Disney Land is always fun! A little pricey but fun.
Gold response
  Usr  anything else?
Generated candidate
  Usr  ok i will go there [0.799]
Retrieved candidate
  Usr  sounds good, book tickets [0.812]

Table B.5: GRTr predictions with the closest generated/retrieved candidates - MetaLWOz cross-task (in bold is the model's final response)

Context
  Wiz  Hello how may I help you?
  Usr  i need a room
  Wiz  Sure, where are you looking for a room at?
  Usr  with 2 bedrooms
  Wiz  Okay, I can help with that.
  Usr  and it is located on the second floor of a hotel
  Wiz  Okay, what city are you interested in?
  Usr  te specific place is in toronto
  Wiz  What date would you like to book the room for?
Gold response
  Usr  Next monday
Generated candidate
  Usr  May 16th [2.642]
Retrieved candidate
  Usr  Thanks a lot! [-13.151]

Context
  Wiz  Hello how may I help you?
  Usr  Hey, I've got a question about tourism
  Wiz  Sure, I can help with that.
  Usr  I'm curious if there are any bars around here
  Wiz  Could you specify your location for me?
Gold response
  Usr  Where I am?
Generated candidate
  Usr  I'm in Tampa, I need some info on the best time to visit the city [3.008]
Retrieved candidate
  Usr  I need some information about Rome and where to go. [-12.744]

Context
  Wiz  Hello how may I help you?
  Usr  I'm here visiting, are there any good bars in the area?
  Wiz  I'm sorry I can't help unless you clarify your question
  Usr  I'm looking for good bars in the area, craft beer or wine?
  Wiz  What area?
  Usr  Downtown in the city
  Wiz  What city?
Gold response
  Usr  Tampa
Generated candidate
  Usr  Montreal [4.48]
Retrieved candidate
  Usr  Ok thanks for the tip [-10.164]

Context
  Wiz  Hello how may I help you?
  Usr  I am a history buff. Do you know if there are any great historical sites to visit in Rome?
  Wiz  Leonardo da Vinci was one of the greatest inventors of all time and they have a museum for him there
  Usr  Oh that is so cool. I want to see that. Are there any other places to visit?
  Wiz  Well if you want to go look at a bunch of old bones of dead people that have died throughout the ages you can always go to the catacombs
Gold response
  Usr  Okay that sounds like a great place to go also. Keep the ideas coming
Generated candidate
  Usr  Oh, that sounds fun. I'll do that. [2.109]
Retrieved candidate
  Usr  I would like to know more about the air-balloon rides [-12.116]

Context
  Wiz  Hello how may I help you?
  Usr  Can you help me with information on Montreal?
  Wiz  What would you like to know?
Gold response
  Usr  Are there any festivals there in the summer?
Generated candidate
  Usr  What is the tallest building there? [1.905]
Retrieved candidate
  Usr  Ok, thank you. I was wondering where in Rome I can visit to learn about it's history. [-12.161]

Table B.6: GRTr predictions with the most distant generated/retrieved candidates - MetaLWOz cross-task (in bold is the model's final response)

Context
  Wiz  Hello how may I help you?
  Usr  I am looking for a place to dine. The restaurant should be expensive and should serve Chinese food.
  Wiz  Would you like the restaurant in a specific place?
Gold response
  Usr  Not in any specific area please. I need to book a table for 3 at 14:00 on Saturday.
Generated candidate
  Usr  Do you have the address? [-1.633]
Retrieved candidate
  Usr  I am looking for a restaurant in the centre of town, please. [-1.635]

Context
  Wiz  Hello how may I help you?
  Usr  I need a hotel in the south of the city, it is ok if the hotel doesn't has free parking.
  Wiz  I have one hotel that meets your needs. It's the Lensfield Hotel that offers free wifi and parking. They have a 3 star rating. Would you like more info on it?
  Usr  What is the price of the Lensfield?
  Wiz  The Lensfield Hotel is expensive.
  Usr  Is there a moderate place in the in the south that has free parking?
  Wiz  No, no hotels meet your criteria.
  Usr  Are there any moderately priced hotels that don't have free parking, but have wifi?
  Wiz  We have 2 locations that are in the south, moderately priced, and have free wifi. They are both guesthouses, would one of those meet your needs?
Gold response
  Usr  Yes, can you give me the address of your favourite one?
Generated candidate
  Usr  Can you book me for 2 nights for 2 nights starting from Wednesday. [-0.013]
Retrieved candidate
  Usr  Thanks for the information! [-0.01]

Context
  Wiz  Hello how may I help you?
  Usr  I am hoping you might be able to help me find a place to stay last minute. I am on a bit of a budget so I need something moderate.
  Wiz  I have three hotels available. The Ashley Hotel, Cityroomz, and Lovell Lodge. Do you need parking?
  Usr  I need one in the centre and has free wifi.
  Wiz  Then Cityroomz is just for you! It is located at sleeperz hotel, station road. Do you need more information or do you want to book a room?
Gold response
  Usr  That is all that I need. Thank you so much.
Generated candidate
  Usr  I would like to book it for 4 people and 1 night starting on Saturday. [-0.624]
Retrieved candidate
  Usr  Okay, I'd like to book a room at the Gonville Hotel for 4 nights. There will be 6 people and we will be arriving on Saturday. [-0.628]

Context
  Wiz  Hello how may I help you?
  Usr  I am looking for information about the Parkside Police Station.
  Wiz  Sure, Parkside Police Station Telephone 01223358966, Address Parkside, Cambridge, Postcode CB11JG. Is there anything else I can do for you today?
Gold response
  Usr  No, that's everything I needed, thanks.
Generated candidate
  Usr  No, that is all, thank you. [2.783]
Retrieved candidate
  Usr  Yes, I'll also need their phone number. [2.779]

Context
  Wiz  Hello how may I help you?
  Usr  I'm looking for a place to dine that serves British food. It should be in the moderate price range.
  Wiz  How about The Oak Bistro? It's located in the Centre. Would you like more information or to book a table?
Gold response
  Usr  The Oak Bistro sounds fabulous. Can I get a table there on Sunday at 18:30? There will be 6 of us.
Generated candidate
  Usr  Yes, I need to book a table for 5 people at 15:30 on saturday. [1.259]
Retrieved candidate
  Usr  Yes, I'd like a table for a party of 8 at 11:15 on Thursday. [1.255]

Table B.7: GRTr predictions with the closest generated/retrieved candidates - MultiWOZ (in bold is the model's final response)

Context
  Wiz  Hello how may I help you?
  Usr  i am looking for a train. The train should leave on Monday and should arrive by 12:45.
  Wiz  Where would you like to go?
Gold response
  Usr  I would like to depart from Cambridge, and travel to Bishops Stortford.
Generated candidate
  Usr  i would like to go to Cambridge. [5.253]
Retrieved candidate
  Usr  I would like to leave after 09:00 on Sunday. [-11.019]

Context
  Wiz  Hello how may I help you?
  Usr  I'm looking for a restaurant in the west side that's in the moderate price range.
  Wiz  I have three listings, one for british, indian, italian. Are any of those types of food you are looking for?
  Usr  Are any of them available for 8 people at 19:00 on Wednesday?
  Wiz  There are two restaurants available the Indian restaurant Meghna and the Italian restaurant Prezzo 8 people at 19:00 on Wednesday. Which one do you prefer?
Gold response
  Usr  The Italian restaurant Prezzo sounds good.
Generated candidate
  Usr  Prezzo 8 will be great, how about Meghna? [7.966]
Retrieved candidate
  Usr  Let's try the Brasserie. [-6.55]

Context
  Wiz  Hello how may I help you?
  Usr  Hello there. I am trying to locate a train for Monday.
  Wiz  Sure. Where will you be going?
  Usr  I am going to Cambridge.
  Wiz  All right, and where would you like to depart from?
  Usr  I'll will be leaving from London Liverpool Street.
  Wiz  I've got 10 trains that are on that route. Do you have a preferred departure or arrival time?
  Usr  I need on that will leave London after 21:00.
  Wiz  I have two trains that will leave after 21:00. One leaves at 21:39 and one leaves at 23:39. Which would you prefer?
Gold response
  Usr  What time does the one that leaves at 21:39 arrive in Cambridge? Also, what is the train ID?
Generated candidate
  Usr  I'd like to leave after 21:00 and leave the train that leaves after 21:00 [-12.611]
Retrieved candidate
  Usr  I don't actually need to book at the moment. I just need to know the departure time and travel time for this train, if you don't [0.699]

Context
  Wiz  Hello how may I help you?
  Usr  I am looking for city centre north b and b
  Wiz  I have found the guesthouse you were wanting. Would you like me to book this for you?
Gold response
  Usr  Yes, please book it for 1 person and for 5 nights starting Friday.
Generated candidate
  Usr  Yes please book it for 5 people. [0.706]
Retrieved candidate
  Usr  Do any of those include free parking? [-11.545]

Context
  Wiz  Hello how may I help you?
  Usr  Hi, I'm looking for some train information. Could you tell me what trains leave on Wednesday for Norwich?
  Wiz  There are 19 entries found. Where would you be coming from?
  Usr  I'll be departing from Cambridge and I need to arrive by 12:00.
  Wiz  There is a train that arrives at 11:55. The trainID is TR9635. Would you like me to book that?
Gold response
  Usr  Sure, that sounds great.
Generated candidate
  Usr  Yes, that would be great. [4.836]
Retrieved candidate
  Usr  Great can I get TR5173 booked for 3 people please? [-7.307]

Table B.8: GRTr predictions with the most distant generated/retrieved candidates - MultiWOZ (in bold is the model's final response)

[Figure B.1: MetaLWOz pure task: a histogram of pairwise distances between generated and retrieved GRTr candidates (x-axis: Distance, y-axis: Count)]

[Figure B.2: MetaLWOz cross-task: a histogram of pairwise distances between generated and retrieved GRTr candidates (x-axis: Distance, y-axis: Count)]

[Figure B.3: MultiWOZ: a histogram of pairwise distances between generated and retrieved GRTr candidates (x-axis: Distance, y-axis: Count)]

Appendix C Disfluency Detection - Supplementary Material

Parameter            Value
Optimiser            stochastic gradient descent
Loss function        weighted cross-entropy
Vocabulary size      6157
Embedding size       128
MLP layer sizes      [128]
Learning rate        0.01
Learning rate decay  0.9
Batch size           32
α                    0.1
λ                    0.001
γ                    1.05

Table C.1: Multi-task LSTM training setup

Label                           Frequency
Fluent token                    574,771
Edit token                      45,729
Single-token substitution       13,003
Single-token deletion           1,011
Multi-token substitution start  6,976
Multi-token substitution end    6,818

Table C.2: SWDA labels

Appendix D Data-Efficiency in Social Dialogue - Supplementary Material

Parameter              Value
vocabulary size        60000
learning rate          0.01
embedding size         256
RNN cell type          GRU
optimiser              Adagrad
loss function          MSE
dropout ratio          0.4
predictor layer sizes  [256] (length), [128, 32, 32] (rating)
batch size             8
max utterance length   30 tokens

Table D.1: Neural rankers training setup
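Each GRTr example in Tables B.3-B.8 pairs a generated candidate and a retrieved candidate, each with its ranker score in brackets, and the bolded line is the model's final response. A minimal pure-Python sketch of that selection rule, assuming (as the paired scores suggest) that the final response is simply the higher-scoring candidate; the function name and tuple layout are ours for illustration, not taken from the thesis:

```python
def choose_response(generated, retrieved):
    """GRTr-style selection between (text, score) candidate pairs.

    Both candidates are scored by the same ranker, and the one with
    the higher score is returned as the final response. Hypothetical
    helper for illustration only.
    """
    return max((generated, retrieved), key=lambda cand: cand[1])


# First entry of Table B.3: the retrieved candidate (0.384) edges out
# the generated one (0.375), so it would be the final response here.
generated = ("I don't think that's a good idea. "
             "What are the best places to bring kids?", 0.375)
retrieved = ("Oh I know we'd never go if we didn't get "
             "a Florida discount on tickets", 0.384)
text, score = choose_response(generated, retrieved)
```

The same rule explains the "most distant" tables: when the two candidates score very differently (e.g. 5.081 vs. -9.761), the choice is clear-cut, whereas the "closest" tables show near-ties.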
Social Dialogue 149 Parameter Value feature bit length 16 loss function squared features context ngrams, response ngrams, turn number, bot, utterance length, handcrafted features quadratic features response ngrams \u00d7 response ngrams, context ngrams \u00d7 response ngrams, bot name \u00d7 response ngrams, bot name \u00d7 context ngrams, bot name \u00d7 handcrafted features cubic features bot name \u00d7 context ngrams \u00d7 response ngrams holdout set o\ufb00 passes number 1 Table D.2: VowpalWabbit ranker training setup \fBibliography Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Man\u00e9, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Vi\u00e9gas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., and Zheng, X. (2015). TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensor\ufb02ow.org. Agarwal, A., Chapelle, O., Dud\u00edk, M., and Langford, J. (2014). A reliable e\ufb00ective terascale linear learning system. Journal of Machine Learning Research, 15(1):1111\u20131133. Aist, G., Allen, J., Campana, E., Gallo, C. G., Stoness, S., Swift, M., and Tanenhaus, M. K. (2007). Incremental dialogue system faster than and preferred to its nonincremental counterpart. In Annual Conference of the Congitive Science Society. Al-Rfou, R., Pickett, M., Snaider, J., Sung, Y.-h., Strope, B., and Kurzweil, R. (2016). Conversational Contextual Cues: The Case of Personalization and History for Response Ranking. CoRR, abs/1606.00372. Allen, J. F., Byron, D. K., Dzikovska, M., Ferguson, G., Galescu, L., and Stent, A. (2001). Toward conversational human-computer interaction. AI Mag., 22(4):27\u201337. Asri, L. 
E., Schulz, H., Sharma, S., Zumer, J., Harris, J., Fine, E., Mehrotra, R., and Suleman, K. (2017). Frames: a corpus for adding memory to goal-oriented dialogue systems. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, Saarbr\u00fccken, Germany, August 15-17, 2017, pages 207\u2013219. Bahdanau, D., Cho, K., and Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. Bahdanau, D., Cho, K., and Bengio, Y. (2015). Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. 150 \fBibliography 151 Baumgartner, J., Zannettou, S., Keegan, B., Squire, M., and Blackburn, J. (2020). The pushshift reddit dataset. CoRR, abs/2001.08435. Bellman, R. and Kalaba, R. (1957). On the role of dynamic programming in statistical communication theory. IRE Trans. Information Theory, 3(3):197\u2013203. Berner, C., Brockman, G., Chan, B., Cheung, V., Debiak, P., Dennison, C., Farhi, D., Fischer, Q., Hashme, S., Hesse, C., J\u00f3zefowicz, R., Gray, S., Olsson, C., Pachocki, J., Petrov, M., de Oliveira Pinto, H. P., Raiman, J., Salimans, T., Schlatter, J., Schneider, J., Sidor, S., Sutskever, I., Tang, J., Wolski, F., and Zhang, S. (2019). Dota 2 with large scale deep reinforcement learning. CoRR, abs/1912.06680. Blanco, R., Ottaviano, G., and Meij, E. (2015). Fast and space-e\ufb03cient entity linking in queries. In Proceedings of the Eight ACM International Conference on Web Search and Data Mining, WSDM 15, New York, NY, USA. ACM. Bordes, A., Boureau, Y., and Weston, J. (2017). Learning end-to-end goal-oriented dialog. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Bortfeld, H., Leon, S. D., Bloom, J. E., Schober, M. F., and Brennan, S. E. (2001). 
Dis\ufb02uency rates in conversation: E\ufb00ects of age, relationship, topic, role, and gender. Language and speech, 44(2):123\u2013147. Bottou, L. (2010). Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT\u20192010, pages 177\u2013186, Heidelberg. Physica-Verlag HD. Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A. M., J\u00f3zefowicz, R., and Bengio, S. (2016). Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 10\u201321. Brennan, S. and Schober, M. (2001). How listeners compensate for dis\ufb02uencies in spontaneous speech. Journal of Memory and Language, 44(2):274\u2013296. Budzianowski, P. and Vulic, I. (2019). Hello, it\u2019s GPT-2 how can I help you? towards the use of pretrained language models for task-oriented dialogue systems. In Proceedings of the 3rd Workshop on Neural Generation and Translation@EMNLP-IJCNLP 2019, Hong Kong, November 4, 2019, pages 15\u201322. Association for Computational Linguistics. Budzianowski, P., Wen, T., Tseng, B., Casanueva, I., Ultes, S., Ramadan, O., and Gasic, M. (2018). Multiwoz A large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural \fBibliography 152 Language Processing, Brussels, Belgium, October 31 November 4, 2018, pages 5016\u20135026. Association for Computational Linguistics. Burges, C. J. C., Shaked, T., Renshaw, E., Lazier, A., Deeds, M., Hamilton, N., and Hullender, G. N. (2005). Learning to rank using gradient descent. In Machine Learning, Proceedings of the Twenty-Second International Conference (ICML 2005), Bonn, Germany, August 7-11, 2005, pages 89\u201396. Cann, R., Kempson, R., and Marten, L. (2005). The Dynamics of Language. Elsevier, Oxford. Cao, K. and Clark, S. (2017). Latent variable dialogue models and their diversity. 
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 2: Short Papers, pages 182\u2013187. Association for Computational Linguistics. Chang, A. X. and Manning, C. D. (2012). Sutime: A library for recognizing and normalizing time expressions. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, LREC 2012, Istanbul, Turkey, May 23-25, 2012, pages 3735\u20133740. Charniak, E. and Johnson, M. (2001). Edit detection and parsing for transcribed speech. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, pages 1\u20139. Association for Computational Linguistics. Cho, K., van Merri\u00ebnboer, B., Bahdanau, D., and Bengio, Y. (2014a). On the properties of neural machine translation: Encoder\u2013decoder approaches. Syntax, Semantics and Structure in Statistical Translation, page 103. Cho, K., van Merrienboer, B., G\u00fcl\u00e7ehre, \u00c7., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014b). Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1724\u20131734. Colman, M. and Healey, P. G. T. (2011). The distribution of repair in dialogue. In Proceedings of the 33rd Annual Meeting of the Cognitive Science Society, pages 1563\u20131568, Boston, MA. Cooper, R. (2005). Records and record types in semantic theory. Journal of Logic and Computation, 15(2):99\u2013112. Cuay\u00e1huitl, H. (2016). Simpleds: A simple deep reinforcement learning dialogue system. 
In Dialogues with Social Robots Enablements, Analyses, and Evaluation, Seventh International Workshop on Spoken Dialogue Systems, IWSDS 2016, Saariselk\u00e4, Finland, January 13-16, 2016, volume 427 of Lecture Notes in Electrical Engineering, pages 109\u2013118. Springer. \fBibliography 153 Curry, A. C., Papaioannou, I., Suglia, A., Agarwal, S., Shalyminov, I., Xu, X., Dusek, O., Eshghi, A., Konstas, I., Rieser, V., and Lemon, O. (2018). Alana v2: Entertaining and informative open-domain social dialogue using ontologies and entity linking. In 2018 Alexa Prize Proceedings. Dahlb\u00e4ck, N., J\u00f6nsson, A., and Ahrenberg, L. (1998). Wizard of Oz Studies\u2014Why and How, page 610\u2013619. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA. Danescu-Niculescu-Mizil, C. and Lee, L. (2011). Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011. Deng, J., Dong, W., Socher, R., Li, L., Li, K., and Li, F. (2009). Imagenet: A large-scale hierarchical image database. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pages 248\u2013 255. IEEE Computer Society. Dethlefs, N. and Cuay\u00e1huitl, H. (2010). Hierarchical reinforcement learning for adaptive text generation. In INLG 2010 Proceedings of the Sixth International Natural Language Generation Conference, July 7-9, 2010, Trim, Co. Meath, Ireland. The Association for Computer Linguistics. Devlin, J., Chang, M., Lee, K., and Toutanova, K. (2019). BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171\u20134186. 
Association for Computational Linguistics. Dinarelli, M., Quarteroni, S., Tonelli, S., Moschitti, A., and Riccardi, G. (2009). Annotating spoken dialogs: From speech segments to dialog acts and frame semantics. In Proceedings of SRSL 2009, the 2nd Workshop on Semantic Representation of Spoken Language, pages 34\u201341, Athens, Greece. Association for Computational Linguistics. Duchi, J. C., Hazan, E., and Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121\u20132159. Dusek, O. and Jurcicek, F. (2015). Training a natural language generator from unaligned data. Proceedings of ACL. Dusek, O. and Jurc\u00edcek, F. (2016). Sequence-to-sequence generation for spoken dialogue via deep syntax trees and strings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 2: Short Papers. The Association for Computer Linguistics. \fBibliography 154 Dziri, N., Kamalloo, E., Mathewson, K. W., and Za\u00efane, O. R. (2019). Evaluating coherence in dialogue systems using entailment. In NAACLHLT. E, H., Niu, P., Chen, Z., and Song, M. (2019). A novel bi-directional interrelated model for joint intent detection and slot \ufb01lling. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28August 2, 2019, Volume 1: Long Papers, pages 5467\u20135471. Association for Computational Linguistics. Elman, J. L. (1990). Finding structure in time. Cogn. Sci., 14(2):179\u2013211. Eric, M., Goel, R., Paul, S., Sethi, A., Agarwal, S., Gao, S., and Hakkani-Tur, D. (2019). Multiwoz 2.1: Multi-domain dialogue state corrections and state tracking baselines. arXiv preprint arXiv:1907.01669. Eric, M., Krishnan, L., Charette, F., and Manning, C. D. (2017). Key-value retrieval networks for task-oriented dialogue. 
In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, Saarbr\u00fccken, Germany, August 15-17, 2017, pages 37\u201349. Eric, M. and Manning, C. D. (2017). A copy-augmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. In Lapata, M., Blunsom, P., and Koller, A., editors, Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 2: Short Papers, pages 468\u2013473. Association for Computational Linguistics. Eshghi, A. (2015). DS-TTR: An incremental, semantic, contextual parser for dialogue. In Proceedings of Semdial 2015 (goDial), the 19th workshop on the semantics and pragmatics of dialogue. Eshghi, A., Hough, J., and Purver, M. (2013a). Incremental grammar induction from childdirected dialogue utterances. In Proceedings of the 4th Annual Workshop on Cognitive Modeling and Computational Linguistics (CMCL), pages 94\u2013103, So\ufb01a, Bulgaria. Association for Computational Linguistics. Eshghi, A., Hough, J., Purver, M., Kempson, R., and Gregoromichelaki, E. (2012). Conversational interactions: Capturing dialogue dynamics. In From Quanti\ufb01cation to Conversation: Festschrift for Robin Cooper on the occasion of his 65th birthday, volume 19 of Tributes, pages 325\u2013349. College Publications, London. Eshghi, A., Howes, C., Gregoromichelaki, E., Hough, J., and Purver, M. (2015). Feedback in conversation as incremental semantic update. In Proceedings of the 11th International Conference on Computational Semantics (IWCS 2015), London, UK. Association for Computational Linguisitics. \fBibliography 155 Eshghi, A. and Lemon, O. (2014). How domain-general can we be? Learning incremental dialogue systems without dialogue acts. In Proceedings of Semdial 2014 (DialWatt). Eshghi, A., Purver, M., and Hough, J. (2011). Dylan: Parser for dynamic syntax. Technical report, Queen Mary University of London. 
Eshghi, A., Purver, M., Hough, J., and Sato, Y. (2013b). Probabilistic grammar induction in an incremental semantic framework. In CSLP, Lecture Notes in Computer Science. Springer. Eshghi, A., Shalyminov, I., and Lemon, O. (2017). Bootstrapping incremental dialogue systems from minimal data: the generalisation power of dialogue grammars. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2220\u20132230. Fedorenko, D., Smetanin, N., and Rodichev, A. (2018). Avoiding Echo-Responses in a Retrieval-Based Conversation System. In Arti\ufb01cial Intelligence and Natural Language. Fern\u00e1ndez, R. (2006). Non-Sentential Utterances in Dialogue: Classi\ufb01cation, Resolution and Use. PhD thesis, King\u2019s College London, University of London. Fern\u00e1ndez, R. and Ginzburg, J. (2002). Non-sentential utterances: Grammar and dialogue dynamics in corpus annotation. In Proceedings of the 19th International Conference on Computational Linguistics (COLING), pages 253\u2013259. Ferreira, F., Lau, E. F., and Bailey, K. G. D. (2004). Dis\ufb02uencies, language comprehension, and tree adjoining grammars. Cognitive Science, 28(5):721\u2013749. Finkel, J. R., Grenager, T., and Manning, C. D. (2005). Incorporating non-local information into information extraction systems by gibbs sampling. In ACL 2005, 43rd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, 25-30 June 2005, University of Michigan, USA, pages 363\u2013370. The Association for Computer Linguistics. Ganhotra, J. and Polymenakos, L. (2018). Knowledge-based end-to-end memory networks. CoRR, abs/1804.08204. Gao, J., Galley, M., and Li, L. (2019). Neural approaches to conversational ai. Foundations and Trends in Information Retrieval. Gilbert, C. J. and Hutto, E. (2014). VADER: A parsimonious rule-based model for sentiment analysis of social media text. 
In Eighth International AAAI Conference on Weblogs and Social Media, pages 216\u2013225, Ann Arbor, MI, USA. Ginzburg, J. (2012). The Interactive Stance: Meaning for Conversation. Oxford University Press. \fBibliography 156 Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning. MIT Press. http: //www.deeplearningbook.org. Gu, J., Wang, Y., Chen, Y., Li, V. O. K., and Cho, K. (2018). Meta-learning for low-resource neural machine translation. In EMNLP. Hall, P., Miller, H., et al. (2009). Using the bootstrap to quantify the authority of an empirical ranking. The Annals of Statistics, 37(6B):3929\u20133959. He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770\u2013778. IEEE Computer Society. Healey, P. G. T., Mills, G., Eshghi, A., and Howes, C. (2018). Running repairs: Coordinating meaning in dialogue. topiCS, 10(2):367\u2013388. Henderson, M., Thomson, B., and Williams, J. D. (2014a). The second dialog state tracking challenge. In Proceedings of the SIGDIAL 2014 Conference, The 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 18-20 June 2014, Philadelphia, PA, USA, pages 263\u2013272. Henderson, M., Thomson, B., and Williams, J. D. (2014b). The third dialog state tracking challenge. In 2014 IEEE Spoken Language Technology Workshop, SLT 2014, South Lake Tahoe, NV, USA, December 7-10, 2014, pages 324\u2013329. IEEE. Henderson, M., Thomson, B., and Young, S. (2014c). Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292\u2013299, Philadelphia, PA, U.S.A. Association for Computational Linguistics. Henderson, M., Thomson, B., and Young, S. J. (2014d). 
Robust dialog state tracking using delexicalised recurrent neural networks and unsupervised adaptation. In 2014 IEEE Spoken Language Technology Workshop, SLT 2014, South Lake Tahoe, NV, USA, December 7-10, 2014, pages 360–365. IEEE. Henderson, M., Thomson, B., and Young, S. J. (2014e). Word-based dialog state tracking with recurrent neural networks. In Proceedings of the SIGDIAL 2014 Conference, The 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 18-20 June 2014, Philadelphia, PA, USA, pages 292–299. Hill, F., Cho, K., and Korhonen, A. (2016). Learning distributed representations of sentences from unlabelled data. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, USA, June 12-17, 2016, pages 1367–1377. Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8):1735–1780. Hoffman, M., Bach, F. R., and Blei, D. M. (2010). Online learning for latent Dirichlet allocation. In Advances in Neural Information Processing Systems 23, pages 856–864. Holtzman, A., Buys, J., Du, L., Forbes, M., and Choi, Y. (2020). The curious case of neural text degeneration. International Conference on Learning Representations. Hough, J. (2011). Incremental semantics driven natural language generation with self-repairing capability. In Recent Advances in Natural Language Processing (RANLP), pages 79–84, Hissar, Bulgaria. Hough, J. (2014). Modelling incremental self-repair processing in dialogue. PhD thesis, Queen Mary University of London, UK. Hough, J. and Purver, M. (2014a). Probabilistic type theory for incremental dialogue processing. In Proceedings of the EACL 2014 Workshop on Type Theory and Natural Language Semantics (TTNLS), pages 80–88, Gothenburg, Sweden. Association for Computational Linguistics. Hough, J. and Purver, M. (2014b).
Strongly incremental repair detection. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 78–89. Hough, J. and Schlangen, D. (2015). Recurrent neural networks for incremental disfluency detection. In INTERSPEECH 2015, 16th Annual Conference of the International Speech Communication Association, Dresden, Germany, September 6-10, 2015, pages 849–853. Howard, J. and Ruder, S. (2018). Universal language model fine-tuning for text classification. In Gurevych, I. and Miyao, Y., editors, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 328–339. Association for Computational Linguistics. Howes, C., Healey, P. G. T., and Purver, M. (2010). Tracking lexical and syntactic alignment in conversation. In Proceedings of the 32nd Annual Conference of the Cognitive Science Society, Portland, OR. Huang, P., He, X., Gao, J., Deng, L., Acero, A., and Heck, L. P. (2013). Learning deep structured semantic models for web search using clickthrough data. In 22nd ACM International Conference on Information and Knowledge Management, CIKM’13, San Francisco, CA, USA, October 27 – November 1, 2013, pages 2333–2338. ACM. Ippolito, D., Kriz, R., Sedoc, J., Kustikova, M., and Callison-Burch, C. (2019). Comparison of diverse decoding methods from conditional language models. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 – August 2, 2019, Volume 1: Long Papers, pages 3752–3762. Association for Computational Linguistics. Ji, Z., Lu, Z., and Li, H. (2014). An Information Retrieval Approach to Short Text Conversation. CoRR, abs/1408.6988. Jia, R. and Liang, P. (2016).
Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics. Jiang, S. and de Rijke, M. (2018). Why are sequence-to-sequence models so dull? Understanding the low-diversity problem of chatbots. In Proceedings of the 2nd International Workshop on Search-Oriented Conversational AI, SCAI@EMNLP 2018, Brussels, Belgium, October 31, 2018, pages 81–86. Association for Computational Linguistics. Johnson, J., Douze, M., and Jégou, H. (2017). Billion-scale similarity search with GPUs. arXiv preprint arXiv:1702.08734. Joshi, M., Chen, D., Liu, Y., Weld, D. S., Zettlemoyer, L., and Levy, O. (2020). SpanBERT: Improving pre-training by representing and predicting spans. Trans. Assoc. Comput. Linguistics, 8:64–77. Jurafsky, D. and Martin, J. H. (2019). Speech and language processing: an introduction to natural language processing, computational linguistics, and speech recognition, 3rd edition draft. Kempson, R., Gregoromichelaki, E., Eshghi, A., and Hough, J. (2017). Ellipsis in Dynamic Syntax. In Oxford Handbook of Ellipsis. Oxford University Press. Kempson, R., Meyer-Viol, W., and Gabbay, D. (2001). Dynamic Syntax: The Flow of Language Understanding. Wiley-Blackwell. Kim, Y. (2014). Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751. Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. In Bengio, Y. and LeCun, Y., editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Koehn, P., Och, F. J., and Marcu, D. (2003). Statistical phrase-based translation. In Hearst, M. A.
and Ostendorf, M., editors, Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL 2003, Edmonton, Canada, May 27 – June 1, 2003. The Association for Computational Linguistics. Krause, B., Damonte, M., Dobre, M., Duma, D., Fainberg, J., Fancellu, F., Kahembwe, E., Cheng, J., and Webber, B. (2017). Edina: Building an Open Domain Socialbot with Self-dialogues. In 1st Proceedings of Alexa Prize, Las Vegas, NV, USA. arXiv: 1709.09816. Kumar, G., Henderson, M., Chan, S., Nguyen, H., and Ngoo, L. (2018). Question-answer selection in user to user marketplace conversations. In 9th International Workshop on Spoken Dialogue System Technology, IWSDS 2018, Singapore, April 18-20, 2018, volume 579 of Lecture Notes in Electrical Engineering, pages 397–403. Springer. Kwiatkowski, T., Choi, E., Artzi, Y., and Zettlemoyer, L. (2013). Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Lafferty, J. D., McCallum, A., and Pereira, F. C. N. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pages 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Lane, I., Kawahara, T., Matsui, T., and Nakamura, S. (2007). Out-of-domain utterance detection using classification confidences of multiple topics. IEEE Transactions on Audio, Speech, and Language Processing, 15(1):150–161. LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pages 2278–2324. Lee, S. (2017). Toward continual learning for conversational agents. CoRR, abs/1712.09943.
Lee, S., Schulz, H., Atkinson, A., Gao, J., Suleman, K., El Asri, L., Adada, M., Huang, M., Sharma, S., Tay, W., and Li, X. (2019a). Multi-domain task-completion dialog challenge. In Dialog System Technology Challenges 8. Lee, S. and Shalyminov, I. (2019). Contextual out-of-domain utterance handling with counterfeit data augmentation. In ICASSP 2019 – 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7205–7209. Lee, S., Zhu, Q., Takanobu, R., Li, X., Zhang, Y., Zhang, Z., Li, J., Peng, B., Li, X., Huang, M., and Gao, J. (2019b). ConvLab: Multi-domain end-to-end dialog system platform. In ACL. Lewis, M. and Steedman, M. (2013). Combined distributional and logical semantics. Transactions of the Association for Computational Linguistics, 1:179–192. Li, J., Galley, M., Brockett, C., Gao, J., and Dolan, B. (2016a). A diversity-promoting objective function for neural conversation models. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, USA, June 12-17, 2016, pages 110–119. The Association for Computational Linguistics. Li, J., Monroe, W., Ritter, A., Jurafsky, D., Galley, M., and Gao, J. (2016b). Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1192–1202. Li, J., Peng, B., Lee, S., Gao, J., Takanobu, R., Zhu, Q., Huang, M., Schulz, H., Atkinson, A., and Adada, M. (2020). Results of the multi-domain task-completion dialog challenge. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, Eighth Dialog System Technology Challenge Workshop. Liang, W. and Yang, M. (2018). Hierarchical hybrid code networks for task-oriented dialogue. In Intelligent Computing Theories and Application, pages 194–204.
Springer International Publishing. Lison, P. and Tiedemann, J. (2016). OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation, LREC 2016, Portorož, Slovenia, May 23-28, 2016. European Language Resources Association (ELRA). Liu, H., Lin, T., Sun, H., Lin, W., Chang, C.-W., Zhong, T., and Rudnicky, A. (2017). RubyStar: A Non-Task-Oriented Mixture Model Dialog System. In 1st Proceedings of Alexa Prize, Las Vegas, NV, USA. arXiv: 1711.02781. Liu, T.-Y. (2011). Learning to Rank for Information Retrieval. Springer. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. Lowe, R., Noseworthy, M., Serban, I. V., Angelard-Gontier, N., Bengio, Y., and Pineau, J. (2017). Towards an automatic Turing test: Learning to evaluate dialogue responses. In ACL. Lowe, R., Pow, N., Serban, I., and Pineau, J. (2015). The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of SIGDIAL, pages 285–294. Lu, Y., Keung, P., Zhang, S., Sun, J., and Bhardwaj, V. (2017). A practical approach to dialogue response generation in closed domains. CoRR, abs/1703.09439. Luo, L., Huang, W., Zeng, Q., Nie, Z., and Sun, X. (2019). Learning personalized end-to-end goal-oriented dialog. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 – February 1, 2019, pages 6794–6801. AAAI Press. Luong, T., Pham, H., and Manning, C. D. (2015). Effective approaches to attention-based neural machine translation.
In Màrquez, L., Callison-Burch, C., Su, J., Pighin, D., and Marton, Y., editors, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1412–1421. The Association for Computational Linguistics. Ma, Y., Zeng, Z., Zhu, D., Li, X., Yang, Y., Yao, X., Zhou, K., and Shen, J. (2019). An end-to-end dialogue state tracking system with machine reading comprehension and wide & deep classification. CoRR, abs/1912.09297. Mairesse, F., Jurčíček, F., Keizer, S., Thomson, B., Yu, K., and Young, S. (2009). Spoken language understanding from unaligned data using discriminative classification models. In Proceedings of ICASSP. Manning, C. D., Raghavan, P., and Schütze, H. (2008). Introduction to Information Retrieval. Cambridge University Press. Maystre, L. and Grossglauser, M. (2017). Just sort it! A simple and effective approach to active preference learning. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 2344–2353. PMLR. Merity, S., Keskar, N. S., and Socher, R. (2018). Regularizing and optimizing LSTM language models. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 – May 3, 2018, Conference Track Proceedings. OpenReview.net. Merity, S., Xiong, C., Bradbury, J., and Socher, R. (2017). Pointer sentinel mixture models. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Merriam-Webster Dictionary (2020). Dialogue — Merriam-Webster.com dictionary. [Online; accessed 15-May-2020]. Meteer, M., Taylor, A., MacIntyre, R., and Iyer, R. (1995). Disfluency annotation stylebook for the Switchboard corpus.
\fBibliography 162 Mikolov, T., Kara\ufb01\u00e1t, M., Burget, L., Cernock\u00fd, J., and Khudanpur, S. (2010). Recurrent neural network based language model. In Kobayashi, T., Hirose, K., and Nakamura, S., editors, INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan, September 26-30, 2010, pages 1045\u20131048. ISCA. Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States., pages 3111\u20133119. Miller, A. H., Feng, W., Batra, D., Bordes, A., Fisch, A., Lu, J., Parikh, D., and Weston, J. (2017). Parlai: A dialog research software platform. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017 System Demonstrations, pages 79\u201384. Association for Computational Linguistics. Miller, A. H., Fisch, A., Dodge, J., Karimi, A., Bordes, A., and Weston, J. (2016). Key-value memory networks for directly reading documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1400\u20131409. The Association for Computational Linguistics. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M. A. (2013). Playing atari with deep reinforcement learning. CoRR, abs/1312.5602. Mrksic, N., S\u00e9aghdha, D. \u00d3., Thomson, B., Gasic, M., Su, P., Vandyke, D., Wen, T., and Young, S. J. (2015). Multi-domain dialog state tracking using recurrent neural networks. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 2: Short Papers, pages 794–799. The Association for Computer Linguistics. Nair, V. and Hinton, G. E. (2010). Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, pages 807–814. Omnipress. Nakano, M., Sato, S., Komatani, K., Matsuyama, K., Funakoshi, K., and Okuno, H. G. (2011). A two-stage domain selection framework for extensible multi-domain spoken dialogue systems. In Proceedings of the SIGDIAL 2011 Conference, pages 18–29. Association for Computational Linguistics. Nalisnick, E. T., Matsukawa, A., Teh, Y. W., Görür, D., and Lakshminarayanan, B. (2019). Do deep generative models know what they don’t know? In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Novikova, J., Dusek, O., Cercas-Curry, A., and Rieser, V. (2017). Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP). Olah, C. and Carter, S. (2016). Attention and augmented recurrent neural networks. Distill. Oviatt, S. (1995). Predicting spoken disfluencies during human–computer interaction. Computer Speech & Language, 9(1):19–35. Papaioannou, I., Curry, A. C., Part, J. L., Shalyminov, I., Xu, X., Yu, Y., Dusek, O., Rieser, V., and Lemon, O. (2017). An ensemble model with ranking for social dialogue. In NIPS Workshop on Conversational AI. Pappu, A., Blanco, R., Mehdad, Y., Stent, A., and Thadani, K. (2017). Lightweight multilingual entity extraction and linking.
In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, WSDM ’17, New York, NY, USA. ACM. Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. (2017). Automatic differentiation in PyTorch. In NIPS-W. Peckham, J. (1991). Speech understanding and dialogue over the telephone: an overview of the ESPRIT SUNDIAL project. In Speech and Natural Language: Proceedings of a Workshop Held at Pacific Grove, California, February 19-22, 1991. Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global vectors for word representation. Proceedings of the Empirical Methods in Natural Language Processing (EMNLP 2014), 12:1532–1543. Peters, M. E., Ammar, W., Bhagavatula, C., and Power, R. (2017). Semi-supervised sequence tagging with bidirectional language models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 – August 4, Volume 1: Long Papers, pages 1756–1765. Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. (2018). Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227–2237. Pickering, M. J., Clifton, C., and Crocker, M. W. (1999). Architectures and Mechanisms in Sentence Comprehension, pages 1–28. Cambridge University Press. Purver, M., Eshghi, A., and Hough, J. (2011). Incremental semantic construction in a dialogue system. In Proceedings of the 9th International Conference on Computational Semantics, pages 365–369, Oxford, UK. Purver, M., Gregoromichelaki, E., Meyer-Viol, W., and Cann, R. (2010).
Splitting the ‘I’s and crossing the ‘You’s: Context, speech acts and grammar. In Aspects of Semantics and Pragmatics of Dialogue. SemDial 2010, 14th Workshop on the Semantics and Pragmatics of Dialogue, pages 43–50, Poznań. Polish Society for Cognitive Science. Purver, M., Hough, J., and Gregoromichelaki, E. (2014). Dialogue and compound contributions. In Natural Language Generation in Interactive Systems, pages 63–92. Cambridge University Press. Purver, M., Hough, J., and Howes, C. (2018). Computational models of miscommunication phenomena. In Topics in Cognitive Science (topiCS), volume 10. Qian, K. and Yu, Z. (2019). Domain adaptive dialog generation via meta learning. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 – August 2, 2019, Volume 1: Long Papers, pages 2639–2649. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2018). Language models are unsupervised multitask learners. Ram, A., Prasad, R., Khatri, C., Venkatesh, A., Gabriel, R., Liu, Q., Nunn, J., Hedayatnia, B., Cheng, M., Nagar, A., King, E., Bland, K., Wartick, A., Pan, Y., Song, H., Jayadevan, S., Hwang, G., and Pettigrue, A. (2017). Conversational AI: The Science Behind the Alexa Prize. In 1st Proceedings of Alexa Prize, Las Vegas, NV, USA. arXiv: 1801.03604. Ramachandran, D. and Ratnaparkhi, A. (2015). Belief tracking with stacked relational trees. In Proceedings of the SIGDIAL 2015 Conference, The 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2-4 September 2015, Prague, Czech Republic, pages 68–76. The Association for Computer Linguistics. Rei, M. (2017). Semi-supervised multitask learning for sequence labeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 – August 4, Volume 1: Long Papers, pages 2121–2130. Rezende, D.
J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, volume 32 of JMLR Workshop and Conference Proceedings, pages 1278–1286. JMLR.org. Rieser, V. and Lemon, O. (2009). Natural language generation as planning under uncertainty for spoken dialogue systems. Proceedings of ACL. Rieser, V. and Lemon, O. (2011). Reinforcement Learning for Adaptive Dialogue Systems: A Data-driven Methodology for Dialogue Management and Natural Language Generation. Theory and Applications of Natural Language Processing. Springer. Rojas-Barahona, L. M., Gasic, M., Mrksic, N., Su, P., Ultes, S., Wen, T., Young, S. J., and Vandyke, D. (2017). A network-based end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 438–449. Rudnicky, A. I., Thayer, E. H., Constantinides, P. C., Tchou, C., Shern, R., Lenzo, K. A., Xu, W., and Oh, A. (1999). Creating natural dialogs in the Carnegie Mellon Communicator system. In Sixth European Conference on Speech Communication and Technology, EUROSPEECH 1999, Budapest, Hungary, September 5-9, 1999. ISCA. Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1988). Learning Representations by Back-Propagating Errors, pages 696–699. MIT Press, Cambridge, MA, USA. Ryu, S., Kim, S., Choi, J., Yu, H., and Lee, G. G. (2017). Neural sentence embedding using only in-domain sentences for out-of-domain sentence detection in dialog systems. Pattern Recognition Letters, 88:26–32. Sanh, V., Debut, L., Chaumond, J., and Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108. Schapire, R. and Singer, Y. (2000).
BoosTexter: A boosting-based system for text categorization. Machine Learning, 39:135–168. Schlangen, D. and Hough, J. (2017). Joint, incremental disfluency detection and utterance segmentation from speech. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 326–336. Schlangen, D. and Skantze, G. (2009). A general, abstract model of incremental dialogue processing. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 710–718, Athens, Greece. Association for Computational Linguistics. See, A., Liu, P. J., and Manning, C. D. (2017). Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 – August 4, Volume 1: Long Papers, pages 1073–1083. Association for Computational Linguistics. Serban, I. V., Lowe, R., Henderson, P., Charlin, L., and Pineau, J. (2018). A survey of available corpora for building data-driven dialogue systems: The journal version. Dialogue Discourse, 9(1):1–49. Serban, I. V., Sankar, C., Germain, M., Zhang, S., Lin, Z., Subramanian, S., Kim, T., Pieper, M., Chandar, S., Ke, N. R., Mudumba, S., de Brébisson, A., Sotelo, J., Suhubdy, D., Michalski, V., Nguyen, A., Pineau, J., and Bengio, Y. (2017a). A deep reinforcement learning chatbot. NIPS Workshop on Conversational AI, abs/1709.02349. Serban, I. V., Sordoni, A., Bengio, Y., Courville, A., and Pineau, J. (2016). Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, pages 3776–3783. AAAI Press. Serban, I. V., Sordoni, A., Bengio, Y., Courville, A. C., and Pineau, J. (2015).
Hierarchical neural network generative models for movie dialogues. CoRR, abs/1507.04808. Serban, I. V., Sordoni, A., Lowe, R., Charlin, L., Pineau, J., Courville, A. C., and Bengio, Y. (2017b). A hierarchical latent variable encoder-decoder model for generating dialogues. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 3295–3301. AAAI Press. Sha, F. and Pereira, F. (2003). Shallow parsing with conditional random fields. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Volume 1, NAACL ’03, pages 134–141, USA. Association for Computational Linguistics. Shah, P., Hakkani-Tür, D., Tür, G., Rastogi, A., Bapna, A., Nayak, N., and Heck, L. (2018). Building a conversational agent overnight with dialogue self-play. arXiv preprint arXiv:1801.04871. Shalyminov, I., Dusek, O., and Lemon, O. (2018a). Neural response ranking for social conversation: A data-efficient approach. In Proceedings of the 2nd International Workshop on Search-Oriented Conversational AI, SCAI@EMNLP 2018, Brussels, Belgium, October 31, 2018, pages 1–8. Shalyminov, I., Eshghi, A., and Lemon, O. (2017). Challenging Neural Dialogue Models with Natural Data: Memory Networks Fail on Incremental Phenomena. In Proceedings of the 21st Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2017, SaarDial). Shalyminov, I., Eshghi, A., and Lemon, O. (2018b). Multi-Task Learning for Domain-General Spoken Disfluency Detection in Dialogue Systems. In Proceedings of the 22nd Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2018, AixDial). Shalyminov, I. and Lee, S. (2018). Improving robustness of neural dialog systems in a data-efficient way with turn dropout.
The Second NeurIPS Workshop on Conversational AI: “Today’s Practice and Tomorrow’s Potential”, abs/1811.12148. Shalyminov, I., Lee, S., Eshghi, A., and Lemon, O. (2019a). Data-efficient goal-oriented conversation with dialogue knowledge transfer networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 1741–1751. Association for Computational Linguistics. Shalyminov, I., Lee, S., Eshghi, A., and Lemon, O. (2019b). Few-shot dialogue generation without annotated data: A transfer learning approach. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 32–39, Stockholm, Sweden. Association for Computational Linguistics. Shalyminov, I., Sordoni, A., Atkinson, A., and Schulz, H. (2020). Fast domain adaptation for goal-oriented dialogue using a hybrid generative-retrieval transformer. In ICASSP 2020 – 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8039–8043. Shalyminov, I., Sordoni, A., Atkinson, A., and Schulz, H. (2020). Hybrid generative-retrieval transformers for dialogue domain adaptation. Dialog State Tracking Challenge 8, DSTC8-AAAI 2020. Sharma, S., El Asri, L., Schulz, H., and Zumer, J. (2017). Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation. CoRR, abs/1706.09799. Shi, H., Ushio, T., Endo, M., Yamagami, K., and Horii, N. (2016). A multichannel convolutional neural network for cross-language dialog state tracking. In 2016 IEEE Spoken Language Technology Workshop, SLT 2016, San Diego, CA, USA, December 13-16, 2016, pages 559–564. Shi, W., Qian, K., Wang, X., and Yu, Z. (2019a). How to build user simulators to train RL-based dialog systems.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 1990–2000. Association for Computational Linguistics. Shi, W., Zhao, T., and Yu, Z. (2019b). Unsupervised dialog structure learning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 1797–1807. Association for Computational Linguistics. Shriberg, E. (1994). Preliminaries to a Theory of Speech Disfluencies. PhD thesis, University of California, Berkeley. Shukla, S., Liden, L., Shayandeh, S., Kamal, E., Li, J., Mazzola, M., Park, T., Peng, B., and Gao, J. (2020). Conversation Learner – A machine teaching tool for building dialog managers for task-oriented dialog systems. In Çelikyilmaz, A. and Wen, T., editors, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, ACL 2020, Online, July 5-10, 2020, pages 343–349. Association for Computational Linguistics. Simonyan, K. and Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Singh, S. P., Litman, D. J., Kearns, M. J., and Walker, M. A. (2002). Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. J. Artif. Intell. Res., 16:105–133. Skantze, G. and Hjalmarsson, A. (2010). Towards incremental speech generation in dialogue systems. In Proceedings of the SIGDIAL 2010 Conference, pages 1–8, Tokyo, Japan. Association for Computational Linguistics.
Song, Y., Yan, R., Li, X., Zhao, D., and Zhang, M. (2016). Two are better than one: An ensemble of retrieval- and generation-based dialog systems. CoRR, abs/1610.07149. Sordoni, A., Galley, M., Auli, M., Brockett, C., Ji, Y., Mitchell, M., Nie, J.-Y., Gao, J., and Dolan, B. (2015). A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196–205, Denver, Colorado. Association for Computational Linguistics. Su, P., Budzianowski, P., Ultes, S., Gasic, M., and Young, S. J. (2017). Sample-efficient actor-critic reinforcement learning with supervised data for dialogue management. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, Saarbrücken, Germany, August 15-17, 2017, pages 147–157. Association for Computational Linguistics. Su, P., Gasic, M., Mrksic, N., Rojas-Barahona, L. M., Ultes, S., Vandyke, D., Wen, T., and Young, S. J. (2016). Continuously learning neural dialogue management. CoRR, abs/1606.02689. Sukhbaatar, S., Szlam, A., Weston, J., and Fergus, R. (2015). End-to-end memory networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2440–2448. Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13, 2014, Montreal, Quebec, Canada, pages 3104–3112. Tan, M., Xiang, B., and Zhou, B. (2015). LSTM-based deep learning models for non-factoid answer selection. CoRR, abs/1511.04108. Tiedemann, J. (2009). News from OPUS – A collection of multilingual parallel corpora with tools and interfaces.
In Recent Advances in Natural Language Processing, volume V, pages 237\u2013248. John Benjamins, Amsterdam/Philadelphia, Borovets, Bulgaria. Tur, G. (2011). Spoken Language Understanding: Systems for Extracting Semantic Information from Speech. John Wiley and Sons. Tur, G., Deoras, A., and Hakkani-T\u00fcr, D. (2014). Detecting out-of-domain utterances addressed to a virtual personal assistant. In Fifteenth Annual Conference of the International Speech Communication Association. Turing, A. M. (1950). I.\u2014Computing machinery and intelligence. Mind, LIX(236):433\u2013460. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998\u20136008. Venkatesh, A., Khatri, C., Ram, A., Guo, F., Gabriel, R., Nagar, A., Prasad, R., Cheng, M., Hedayatnia, B., Metallinou, A., Goel, R., Yang, S., and Raju, A. (2017). On Evaluating and Comparing Conversational Agents. In NIPS 2017 Workshop on Conversational AI (ConvAI), Long Beach, CA, USA. arXiv: 1801.03625. Vinyals, O., Fortunato, M., and Jaitly, N. (2015). Pointer networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2692\u20132700. Vinyals, O. and Le, Q. V. (2015). A neural conversational model. ICML. Wang, J., Yu, L., Zhang, W., Gong, Y., Xu, Y., Wang, B., Zhang, P., and Zhang, D. (2017a). IRGAN: A minimax game for unifying generative and discriminative information retrieval models. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR \u201917, pages 515\u2013524, New York, NY, USA. ACM. Wang, S., Che, W., and Liu, T. (2016). 
A neural attention model for disfluency detection. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 278\u2013287. Wang, S., Che, W., Zhang, Y., Zhang, M., and Liu, T. (2017b). Transition-based disfluency detection using LSTMs. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2785\u20132794. Wang, Y.-Y., Deng, L., and Acero, A. (2005). Spoken language understanding: an introduction to the statistical framework. IEEE Signal Processing Magazine, 22(5):16\u201331. Ward, W. (1991). Understanding spontaneous speech: The Phoenix system. In Proceedings of the Acoustics, Speech, and Signal Processing, 1991. ICASSP-91., 1991 International Conference, ICASSP \u201991, pages 365\u2013367, USA. IEEE Computer Society. Weizenbaum, J. (1966). Eliza—a computer program for the study of natural language communication between man and machine. Commun. ACM, 9(1):36\u201345. Wen, T., Gasic, M., Kim, D., Mrksic, N., Su, P., Vandyke, D., and Young, S. J. (2015a). Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking. In Proceedings of the SIGDIAL 2015 Conference, The 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2-4 September 2015, Prague, Czech Republic, pages 275\u2013284. The Association for Computer Linguistics. Wen, T.-H., Ga\u0161i\u0107, M., Mrk\u0161i\u0107, N., Su, P.-H., Vandyke, D., and Young, S. (2015b). Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711\u20131721, Lisbon, Portugal. Association for Computational Linguistics. Weston, J., Chopra, S., and Bordes, A. (2015). Memory networks. 
In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Weston, J., Dinan, E., and Miller, A. H. (2018). Retrieve and refine: Improved sequence generation models for dialogue. In Proceedings of the 2nd International Workshop on Search-Oriented Conversational AI, SCAI@EMNLP 2018, Brussels, Belgium, October 31, 2018, pages 87\u201392. Association for Computational Linguistics. Williams, J., Kamal, E., Ashour, M., Amr, H., Miller, J., and Zweig, G. (2015). Fast and easy language understanding for dialog systems with Microsoft Language Understanding Intelligent Service (LUIS). In Proceedings of 2015 SIGDIAL Conference, Prague. ACL Association for Computational Linguistics. Demonstration at SIGDIAL 2015. Williams, J. and Liden, L. (2017). Demonstration of interactive dialog teaching for learning a practical end-to-end dialog manager. In Proceedings of 2017 SIGDIAL Conference. Association for Computational Linguistics. Williams, J. D., Asadi, K., and Zweig, G. (2017). Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 665\u2013677. Association for Computational Linguistics. Williams, J. D., Raux, A., and Henderson, M. (2016). The dialog state tracking challenge series: A review. D&D, 7(3):4\u201333. Williams, J. D., Raux, A., Ramachandran, D., and Black, A. W. (2013). The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, The 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 22-24 August 2013, SUPELEC, Metz, France, pages 404\u2013413. The Association for Computer Linguistics. Williams, J. D. and Young, S. J. (2007). 
Partially observable Markov decision processes for spoken dialog systems. Comput. Speech Lang., 21(2):393\u2013422. Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn., 8:229\u2013256. Wolf, T., Sanh, V., Chaumond, J., and Delangue, C. (2019). TransferTransfo: A transfer learning approach for neural network based conversational agents. NeurIPS Workshop on Conversational AI. Wu, B., Wang, B., and Xue, H. (2016). Ranking responses oriented to conversational relevance in chat-bots. In Proceedings of COLING, pages 652\u2013662, Osaka, Japan. Yamada, K. and Knight, K. (2001). A syntax-based statistical translation model. In Association for Computational Linguistics, 39th Annual Meeting and 10th Conference of the European Chapter, Proceedings of the Conference, July 9-11, 2001, Toulouse, France, pages 523\u2013530. Morgan Kaufmann Publishers. Yan, R., Song, Y., and Wu, H. (2016). Learning to Respond with Deep Neural Networks for Retrieval-Based Human-Computer Conversation System. In Proceedings of SIGIR, pages 55\u201364, Pisa, Italy. Yang, Z., Yang, D., Dyer, C., He, X., Smola, A. J., and Hovy, E. H. (2016). Hierarchical attention networks for document classification. In Knight, K., Nenkova, A., and Rambow, O., editors, NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, USA, June 12-17, 2016, pages 1480\u20131489. The Association for Computational Linguistics. Young, S. (2003). Hidden vector state model for hierarchical semantic parsing. volume 1, pages I\u2013268. Young, S. (2009). CUED standard dialogue acts. Internal Report, Dialogue Systems Group, Cambridge University. Young, S., Gasic, M., Thomson, B., and Williams, J. D. (2013). POMDP-based statistical spoken dialogue systems: a review. Proceedings of the IEEE, 101:1160\u20131179. 
Young, S., Ga\u0161i\u0107, M., Keizer, S., Mairesse, F., Schatzmann, J., Thomson, B., and Yu, K. (2010). The Hidden Information State model: A practical framework for POMDP-based spoken dialogue management. Computer Speech & Language, 24(2):150\u2013174. Young, S. J. (2010). Still talking to machines (cognitively speaking). In Kobayashi, T., Hirose, K., and Nakamura, S., editors, INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan, September 26-30, 2010, pages 1\u201310. ISCA. Yu, Z., Papangelis, A., and Rudnicky, A. (2015). TickTock: A Non-Goal-Oriented Multimodal Dialog System with Engagement Awareness. In Turn-Taking and Coordination in Human-Machine Interaction: Papers from the 2015 AAAI Spring Symposium, pages 108\u2013111, Palo Alto, CA, USA. Yu, Z., Xu, Z., Black, A. W., and Rudnicky, A. I. (2016). Strategy and Policy Learning for Non-Task-Oriented Conversational Systems. In Proc. SIGDIAL, Los Angeles, CA, USA. Zamani, H., Dehghani, M., Croft, W. B., Learned-Miller, E. G., and Kamps, J. (2018). From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM 2018, Torino, Italy, October 22-26, 2018, pages 497\u2013506. ACM. Zang, X., Rastogi, A., Sunkara, S., Gupta, R., Zhang, J., and Chen, J. (2020). MultiWOZ 2.2: A dialogue dataset with additional annotation corrections and state tracking baselines. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, ACL 2020, pages 109\u2013117. Zhang, Y., Galley, M., Gao, J., Gan, Z., Li, X., Brockett, C., and Dolan, B. (2018). Generating informative and diverse conversational responses via adversarial information maximization. 
In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montr\u00e9al, Canada, pages 1815\u20131825. Zhang, Y., Sun, S., Galley, M., Chen, Y.-C., Brockett, C., Gao, X., Gao, J., Liu, J., and Dolan, B. (2020). DialoGPT: Large-scale generative pre-training for conversational response generation. In ACL, system demonstration. Zhao, S., Song, J., and Ermon, S. (2019). InfoVAE: Balancing learning and inference in variational autoencoders. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 5885\u20135892. AAAI Press. Zhao, T. and Esk\u00e9nazi, M. (2018). Zero-shot dialog generation with cross-domain latent actions. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, Melbourne, Australia, July 12-14, 2018, pages 1\u201310. Zhao, T., Lee, K., and Esk\u00e9nazi, M. (2018). Unsupervised discrete sentence representation learning for interpretable neural dialog generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1098\u20131107. Zhao, T., Zhao, R., and Esk\u00e9nazi, M. (2017). Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 654\u2013664. Zhou, L., Gao, J., Li, D., and Shum, H. (2020). The design and implementation of XiaoIce, an empathetic social chatbot. Comput. Linguistics, 46(1):53\u201393. 
Zhuang, Y., Wang, X., Zhang, H., Xie, J., and Zhu, X. (2018). An Ensemble Approach to Conversation Generation. In Natural Language Processing and Chinese Computing, volume 10619, pages 51\u201362. Springer International Publishing. DOI: 10.1007/978-3-319-73618-1_5. Zilka, L. and Jurc\u00edcek, F. (2015). Incremental LSTM-based dialog state tracker. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2015, Scottsdale, AZ, USA, December 13-17, 2015, pages 757\u2013762."
+ },
+ {
+ "url": "http://arxiv.org/abs/2003.01680v2",
+ "title": "Hybrid Generative-Retrieval Transformers for Dialogue Domain Adaptation",
+ "abstract": "Domain adaptation has recently become a key problem in dialogue systems\nresearch. Deep learning, while being the preferred technique for modeling such\nsystems, works best given massive training data. However, in the real-world\nscenario, such resources aren't available for every new domain, so the ability\nto train with a few dialogue examples can be considered essential. Pre-training\non large data sources and adapting to the target data has become the standard\nmethod for few-shot problems within the deep learning framework. In this paper,\nwe present the winning entry at the fast domain adaptation task of DSTC8, a\nhybrid generative-retrieval model based on GPT-2 fine-tuned to the multi-domain\nMetaLWOz dataset. Robust and diverse in response generation, our model uses\nretrieval logic as a fallback, being SoTA on MetaLWOz in human evaluation (>4%\nimprovement over the 2nd place system) and attaining competitive generalization\nperformance in adaptation to the unseen MultiWOZ dataset.",
+ "authors": "Igor Shalyminov, Alessandro Sordoni, Adam Atkinson, Hannes Schulz",
+ "published": "2020-03-03",
+ "updated": "2020-03-06",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "I.2.7"
+ ],
+ "main_content": "Introduction Goal-oriented dialogue is an area of increasingly high interest, both from academic and industrial perspectives. Data-driven approaches to developing such systems (Lemon and Pietquin 2012) proved to be more \ufb02exible and scalable to various scenarios and domains than previous techniques widely employed in industry, mostly based on expert knowledge. The bene\ufb01ts of methods based on machine learning (especially deep learning) can only be experienced when there are excess amounts of training data available; however, in real-world scenarios, there\u2019s only a small amount of initial data available for a new domain. Training techniques must make the most of this small data, i.e. work in a data-ef\ufb01cient way, in order to enable rapid development of dialogue models for an everincreasing number of domains and tasks. The most promising method to achieve this under the deep learning framework has become transfer learning where a large, generic model is \ufb01rst trained from a highly represented source of data, after which it gets adapted to the target task. \u2217Work done during an internship at Microsoft Research Montr\u00b4 eal. Copyright c \u20dd2020, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. 1Code is publicly available at http://tiny.cc/grtr. In this paper, we explore this problem through the Eighth Dialogue System Technology Challenge (DSTC), Fast Domain Adaptation task. Speci\ufb01cally, we propose a hybrid generative/retrieval dialogue model leveraging knowledge transfer from a large-scale pre-trained general-purpose language model. Our model is able to maintain goal-oriented dialogue in a closed domain having only been exposed to a small set of in-domain dialogues as the domain description. 
Our hybrid model achieves state-of-the-art performance on the MetaLWOz dataset when evaluated with human judges, and attains a competitive generalization level in adapting to the goal-oriented MultiWOZ dataset unseen at the main training stage. Automated word overlap-based metrics demonstrate that it outperforms a series of competitive baselines\u2014both generative-only and retrieval-only models. Related work Generative dialogue modeling is an actively researched area, with the sequence-to-sequence (seq2seq) model (Vinyals and Le 2015) gaining wide adoption in both chat-oriented (Serban et al. 2016) and goal-oriented dialogue (Zhao et al. 2017). Initially, these architectures were based on Recurrent Neural Networks such as LSTM (Hochreiter and Schmidhuber 1997) or GRU (Chung et al. 2014), which were quite challenging to train on large amounts of conversational data, causing researchers to focus on improving response diversity (Li et al. 2015) and the overall dialogue consistency (Li and Jurafsky 2016). Quite recently, self-attention mechanisms, like those used in the Transformer (Vaswani et al. 2017), have been adopted for conversation models\u2014together with large-scale pre-training, this resulted in a new generation of seq2seq architectures. The data efficiency of dialogue systems has also been extensively researched in the past. Initially, when the modular dialogue system architecture was the prevalent approach, dialogue managers and state trackers were the components that data-efficient methods were applied to the most. As such, the dialogue state tracker domain adaptation task was initially proposed in DSTC-3 (Henderson, Thomson, and Williams 2014) \u2014 that challenge featured approaches like Bayesian Processes (Gasic et al. 2017) and Recurrent Neural Networks (Mrksic et al. 2015). 
Figure 1: Model diagram: (a) encode the target dialogue context and (b) produce the \u2018generated candidate\u2019; (c) encode support dialogue contexts in a similar way; (d) find the nearest \u2018support\u2019 neighbor and select its response as the \u2018retrieved candidate\u2019; (e) finally, rank the two candidates given the target context and produce the final result.
Later research was focused on the data-efficiency of dialogue managers; for instance, Williams, Asadi, and Zweig (2017) introduced a model designed for bootstrapping from limited training data and further fine-tuning in the reinforcement learning fashion. Furthermore, a recent paper by Vlasov, Drissner-Schmid, and Nichol (2018) proposed a dialogue management model which used a unified embedding space for user and system turns, allowing for efficient cross-domain knowledge transfer. End-to-end dialogue response generation, the technique that followed modular architectures with the arrival of large conversational datasets, was also eventually approached in a data-efficient way. One such method used prior linguistic knowledge to improve zero-shot performance: Eshghi, Shalyminov, and Lemon (2017) proposed a linguistically informed model based on an incremental semantic parser (Eshghi, Purver, and Hough 2011) combined with a reinforcement learning-based agent. The parser was used for both maintaining the agent\u2019s state and pruning the agent\u2019s incremental, word-level generation actions to those leading to syntactically correct word sequences. While outperforming end-to-end dialogue models on bAbI Dialog Tasks (Bordes, Boureau, and Weston 2017) in the extreme zero-shot case (Shalyminov, Eshghi, and Lemon 2017), this method inherited the limitations of the dialogue grammar \u2014 specifically, it is limited to a single closed domain until a wide-coverage grammar is available. 
Zhao and Esk\u00e9nazi (2018) introduced the zero-shot dialogue generation (ZSDG) framework, under which a dialogue system was trained on dialogues from several source domains and a small amount of annotated utterances from the target domain. The key feature in their framework was the unified latent space which was used to encode the user\u2019s queries, dialogue contexts, and annotations. Later, Shalyminov et al. (2019b; 2019a) proposed Dialogue Knowledge Transfer Networks, which approached the problem in a few-shot setup with a separate out-of-domain pre-training stage on a large goal-oriented corpus (MetaLWOz, Lee et al. 2019a). In those approaches, MetaLWOz was used as the source dataset for transfer, whereas we treat it as the target dataset. While the authors used full target-domain dialogues, they ended up using only a fraction of ZSDG\u2019s data in terms of the number of utterances. More generally, transfer learning has been widely adopted for natural language problems with the emergence of large-scale pre-trained text representation models like ELMo (Peters et al. 2018), BERT (Devlin et al. 2019), and GPT-2 (Radford et al. 2018). When applied to dialogue response generation, the most successful approaches made use of a Transformer for chat-oriented dialogue (Wolf et al. 2019) and GPT/GPT-2 for goal-oriented dialogue (Budzianowski and Vulic 2019). Our approach is based on a similar technique, though in addition to fine-tuning a pre-trained model to our task, we augment the generative model with a retrieval component in a hybrid approach. Finally, another recent approach applied to the problem of few-shot dialogue generation is meta-learning (Qian and Yu 2019), under which the task is split into multiple subtasks corresponding to dialogue domains. For each of them, a specialized dialogue model was trained, with their training progress then merged into the main model. 
In general, the intuition behind meta-learning is training a base model which would be best suited for data-efficient fine-tuning \u2013 otherwise known as rapid adaptation \u2013 making the most efficient gradient updates from the few data points available in the target domain. Fast domain adaptation of a goal-oriented dialogue system Goal-oriented dialogue systems can be challenging to bootstrap: for a new domain, little data is available to train e.g. a natural language understanding (NLU) module or other parts of the pipeline.
Figure 2: Human evaluation: rank densities per system (GOLD, Team A-D, BASELINE) for each metric (appropriate, informative, useful, easy_to_answer) with the sample size of 100 dialogues (lower numbers are better). Our submission is denoted as Team B. Densities are determined by drawing 1000 times with replacement from the 100 dialogues and recomputing the rank.
Table 1: Ranking from judges\u2019 pairwise comparisons
| Rank | Submission | Win rate (%) |
| 1 | Gold response | 62.32 |
| 2 | Team B (ours) | 56.85 |
| 3 | Team C | 52.07 |
| 4 | Team A | 47.35 |
| 5 | Baseline 1 | 44.18 |
| 6 | Team D | 37.34 |
Often, a Wizard-of-Oz (WOz; Kelley 1984; Rieser, Kruijff-Korbayov\u00e1, and Lemon 2005) schema can be used to obtain some initial test data; however, this requires training human agents for the task and setting up a complex pipeline. The value of WOz data is limited, since \u201cusers\u201d are mostly hired and might not conform to real users. Additionally, any change in the chatbot interface requires collecting more data. In the context of the DSTC-8 domain adaptation challenge, we aim to build a model that predicts user responses for a goal-oriented dialogue system for which only limited in-domain data is available. Such data could be collected from e.g. 
customer service transcripts, or written by the developers themselves. From this in-domain data, the support set, we would like to extrapolate responses to novel dialogue contexts (the target). However, the support set is typically too small to train a generative dialogue model. Instead, we adapt a generic dialogue model trained on a large corpus of conversations over multiple source domains. Technically, the problem setup is as follows: having trained the base model on the source domains, the model is then fed with one target dialogue and a support set at a time. The model\u2019s task is to predict the next user turn of the target dialogue, taking into account the support set before producing a prediction. At prediction time, each target dialogue is processed in isolation from other target dialogues, such that the model cannot use knowledge or state obtained from other target/support data. Proposed model We use a language model pre-trained on a very large and diverse collection of textual data providing a strong language prior, and then adapt the model for our tasks in the form of fine-tuning. Our base model is GPT-2 (Wolf et al. 2019), a transformer-based language model. In order to adapt GPT-2 for dialogue generation, we first augment the input embedding for each token in the dialogue with (1) a speaker tag embedding identifying the speaker and (2) a turn embedding, identifying the turn number in the current dialogue. These additional embedding matrices are learned solely using the dialogue data. The input token embeddings are then obtained by summing up these representations. We also add two task-specific output layers (or \u201cheads\u201d) for our purposes: a language modeling (LM) head and a next-sentence prediction (NSP) classification head, both trained from randomly initialized parameters. 
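The summing of the three embedding streams can be sketched in plain Python (a minimal illustration only, not the paper's implementation; the toy tables, dimensions, and ids below are placeholders for the learned matrices):

```python
def embed_dialogue(token_ids, speaker_ids, turn_ids, tok_table, spk_table, turn_table):
    """Per-position input embedding = token + speaker-tag + turn embedding,
    as described for the adapted GPT-2 input layer."""
    out = []
    for t, s, u in zip(token_ids, speaker_ids, turn_ids):
        out.append([a + b + c for a, b, c in
                    zip(tok_table[t], spk_table[s], turn_table[u])])
    return out

# Toy 2-dimensional embedding tables (learned during training in reality).
tok_table = {7: [1.0, 0.0], 8: [0.0, 1.0]}
spk_table = {0: [0.1, 0.1], 1: [0.2, 0.2]}   # e.g. 0 = user, 1 = bot
turn_table = {3: [0.01, 0.02]}

vecs = embed_dialogue([7, 8], [0, 1], [3, 3], tok_table, spk_table, turn_table)
```

In a real model this elementwise sum happens inside the embedding layer before the transformer blocks, so no architectural change to GPT-2 itself is needed.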
We fine-tune GPT-2 for response generation by minimizing the negative log-likelihood of the response tokens given the concatenation of the dialogue context and the previous tokens in the response,
L_LM = -log P_LM(X | C) = -sum_{i=1}^{|X|} log P_LM(x_i | x_{i-1}, ..., x_1, C),   (1)
where X is the response and C is the dialogue context, i.e. the concatenation of the tokens in the previous utterances. To predict the next sentence, we proceed as follows: given a context/response pair (C, X), the classification head is trained to produce a binary label y, which is 1 if X is the correct response given the context C, and 0 if X is a distractor (a random utterance from the corpus). We minimize a binary cross-entropy:
L_NSP = -y log P_NSP(y | X, C) - (1 - y) log P_NSP(1 - y | X, C),   (2)
P_NSP(y | X, C) = softmax(f_NSP(h_{X,C})),   (3)
where h_{X,C} is the last hidden state of the last GPT-2 layer after having encoded the concatenation of X and C, and f_NSP is the next-sentence prediction head (in our case a simple linear transformation). In practice, for each (C, X) pair in the corpus, we sample 1 distractor.
Table 2: Automatic evaluation results on MetaLWOz. Columns give MetaLWOz pure task (%) and MetaLWOz cross task (%) scores: BLEU1 | BLEU2 | BLEU3 | CIDEr | METEOR | ROUGE-L for each.
| Model | pure: BLEU1 | BLEU2 | BLEU3 | CIDEr | METEOR | ROUGE-L | cross: BLEU1 | BLEU2 | BLEU3 | CIDEr | METEOR | ROUGE-L |
| Retrieval BERT | 7.93 | 4.43 | 2.87 | 12.56 | 7.38 | 6.91 | 5.35 | 2.16 | 1.05 | 4.98 | 4.56 | 4.52 |
| Retrieval SP+FT | 9.57 | 5.37 | 3.45 | 14.32 | 6.98 | 7.19 | 5.94 | 2.25 | 0.93 | 4.69 | 4.29 | 4.53 |
| HRED | 8.66 | 3.86 | 2.11 | 13.73 | 6.02 | 7.75 | 8.94 | 3.87 | 2.02 | 12.65 | 6.05 | 7.55 |
| GPT-2 \u2013sup (1) | 8.20 | 3.95 | 2.22 | 16.41 | 6.10 | 8.34 | 8.37 | 3.80 | 2.05 | 15.60 | 6.17 | 8.55 |
| GPT-2 \u2013ret (2) | 11.33 | 6.45 | 4.17 | 23.38 | 8.23 | 10.74 | 10.21 | 5.26 | 2.95 | 18.06 | 7.06 | 9.59 |
| GPT-2 hybrid | 12.73 | 7.43 | 4.88 | 28.74 | 9.18 | 11.77 | 10.39 | 5.31 | 2.95 | 18.26 | 7.10 | 9.27 |
(1) does not use support set. (2) fine-tuned to support set, but does not use retrieval logic. 
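Both objectives reduce to simple negative log-likelihoods; a schematic version (the token and label probabilities would come from the model's LM and NSP heads, which are assumed as inputs here):

```python
import math

def lm_loss(token_probs):
    """Eq. (1): sum over response tokens of -log P_LM(x_i | x_1..x_{i-1}, C)."""
    return -sum(math.log(p) for p in token_probs)

def nsp_loss(p_correct, y):
    """Eq. (2): binary cross-entropy of the next-sentence prediction head.
    p_correct = P_NSP(y=1 | X, C); y = 1 for the true response, 0 for a distractor."""
    return -(y * math.log(p_correct) + (1 - y) * math.log(1 - p_correct))

# The combined training objective is the sum of the two losses.
total = lm_loss([0.9, 0.8]) + nsp_loss(0.75, 1)
```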
We obtain a suitable dialogue prior by fine-tuning the modified GPT-2 model on the source domains with both the language modeling and next-sentence prediction tasks as described above, therefore minimizing L = L_NSP + L_LM. Fine-tuning on target domains and prediction As every test dialogue in the target domain/task is accompanied with a small support set of dialogues from the same domain/task, we make use of this data by further fine-tuning the dialogue model on the support dialogues. Crucially, we make sure not to accumulate any information between test dialogues: after each fine-tuning on the support set, we reset the weights of the model to the dialogue prior obtained by the fine-tuning stage described in the previous section. In order to add diversity to the responses, GPT-2 uses nucleus (top-p) sampling (Holtzman et al. 2019) during generation, i.e. the model\u2019s vocabulary V is pruned into V^p, the smallest set such that
sum_{x in V^p} p(x | x_{1:i-1}, C) >= p,   (4)
and the final distribution from which the words are sampled is rescaled as follows:
P'(x | x_{1:i-1}) = P(x | x_{1:i-1}, C) / sum_{x' in V^p} P(x' | x_{1:i-1}, C) if x in V^p, and 0 otherwise.   (5)
Hybrid generative-retrieval prediction In our experiments, we found that retrieval baselines are quite effective on the automatic metrics considered. Combining retrieval techniques with our generative model in a hybrid approach produced a stronger model. The retrieval component is set up as follows: when predicting the t-th turn of the test dialogue, the model embeds its context of length t-1 as well as all the support dialogue contexts of the same length t-1 using the fine-tuned dialogue encoder. The encoding for the dialogue context is the hidden state of the last layer of the Transformer model at the position corresponding to the last token in the context. 
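Nucleus sampling (Eqs. 4-5) is easy to state in code; a dependency-free sketch over a token-to-probability dict (the dict-based interface is for illustration only, not the paper's code):

```python
import random

def nucleus_filter(probs, p=0.9):
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches p (Eq. 4), then renormalize the kept mass (Eq. 5)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cum = {}, 0.0
    for tok, pr in ranked:
        kept[tok] = pr
        cum += pr
        if cum >= p:
            break
    z = sum(kept.values())
    return {tok: pr / z for tok, pr in kept.items()}

def nucleus_sample(probs, p=0.9, rng=random):
    """Draw one token from the renormalized nucleus distribution."""
    filtered = nucleus_filter(probs, p)
    r, acc = rng.random(), 0.0
    for tok, pr in filtered.items():
        acc += pr
        if r <= acc:
            return tok
    return tok  # guard against floating-point shortfall
```

For example, with p=0.7 and probabilities {a: 0.5, b: 0.3, c: 0.2}, only a and b survive the cut and are renormalized to 0.625 and 0.375.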
Then, it selects the nearest support context to the target context and picks its t-th turn as the retrieved candidate response. Finally, the model\u2019s own generated response and the best retrieved candidate response are ranked using the NSP classification head, i.e. both responses are concatenated with the ground-truth context and the one with the higher P_NSP (Eq. 3) is selected. The model is visualized in Figure 1.
Table 3: Automatic evaluation results on the MultiWOZ pure task dataset
| Model | Intent F1 (%) | Intent+Slots F1 (%) |
| Retrieval BERT | 48.00 | 21.95 |
| Retrieval SP+FT | 51.53 | 26.58 |
| HRED | 44.61 | 35.57 |
| GPT-2 \u2013sup (1) | 58.54 | 43.07 |
| GPT-2 \u2013ret (2) | 48.00 | 37.24 |
| Team D | 54.98 | 42.34 |
| Team C | 61.40 | 41.87 |
| GPT-2 hybrid (Team B) | 64.50 | 48.33 |
| Team A | 78.70 | 60.00 |
Baselines We compare our hybrid model to the retrieval baselines provided by the DSTC-8 organizers. The baselines ignore the training data and rely solely on the support sets: they embed each support dialogue\u2019s context and find the one nearest to the target context using cosine distance as the metric. They then return the turn following the identified context as the predicted response. There are two baselines, which differ in their encoder: (1) BERT (Devlin et al. 2019)-based, taken off-the-shelf, and (2) SentencePiece/FastText-based, modeled after Gu et al. (2018), with embeddings pre-trained on the Reddit Conversations corpus. We also compare our model to a bidirectional LSTM-based HRED (Serban et al. 2016) trained on MetaLWOz. Given the time constraints, we could only evaluate a base model without fine-tuning to support sets. Datasets We use MetaLWOz, the dataset for DSTC-8 Track 2 \u201cFast Domain Adaptation\u201d (Lee et al. 2019a). It contains more than 37,000 human-human dialogues spanning a total of 227 tasks in 47 domains. 
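The retrieve-then-rank logic above can be sketched as follows (the embeddings, the `(context_embedding, next_turn)` tuple layout, and the `nsp_score` callable are stand-ins for the fine-tuned model's outputs, not names from the paper's code):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense context embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def hybrid_response(target_ctx_emb, generated, support, nsp_score):
    """support: (context_embedding, next_turn) pairs whose contexts have the
    same length t-1 as the target context. The nearest support neighbour
    supplies the retrieved candidate; the NSP head then picks the final answer."""
    retrieved = max(support, key=lambda s: cosine(target_ctx_emb, s[0]))[1]
    return max([generated, retrieved], key=nsp_score)
```

Keeping the retrieved turn as a fallback is what makes the system "hybrid": whenever the NSP head is unconvinced by its own generation, a real human turn from the support set is returned instead.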
The dialogues are collected in a Wizard-of-Oz style: human participants were assigned the role of bot or user, then given a problem domain and a related specific task, and instructed to reach the user\u2019s goal over at least 10 dialogue turns. For evaluation purposes, we additionally use MultiWOZ (Budzianowski et al. 2018), another multi-domain, multi-task dialogue dataset. Dialogues in MultiWOZ contain NLU annotations, particularly for intent and slots, which we use in order to evaluate the systems\u2019 goal-oriented performance. A subset of MultiWOZ (MultiWOZ pure task), where dialogues only pertain to a single domain, was used for evaluation. Experimental setup and evaluation We perform training in two stages: training of the base model and fine-tuning it to the target dialogue\u2019s support set. At the first stage, we train the model for a maximum of 5 epochs with early stopping. The second stage goes on for 1 epoch in the interest of time. GPT-2 models use a context of 3 exchanges, or 5 turns: bot-user-bot-user-bot, predicting the next user utterance. We mainly used the \u2018small\u2019 GPT-2 checkpoint by HuggingFace\u2014we also tried the \u2018medium\u2019 one, but found no improvement with it on our task. Human evaluation The systems\u2019 main goal is to generate appropriate responses towards maintaining a natural cooperative dialogue on the user\u2019s side, so the main evaluation is performed involving human judges. Specifically, Amazon Mechanical Turk workers were tasked to compare the candidate responses given the dialogue context. Each comparison was pairwise between the results of two systems presented in random order. 
Judges ranked the responses against the following criteria:

• Usefulness: whether the response is useful given the dialogue context and the user's overall final goal;
• Informativeness: whether the response specifically contains information relevant to the conversation;
• Appropriateness: whether the response is appropriate (on-topic, of a reasonable length, not repetitive) to the conversation;
• Easiness to answer: given a hypothetical conversational bot on the system side, whether the response would be a valid input for it and presumably straightforward to process.

For each pairing, 3 independent comparisons were performed against each metric. The number of comparisons required was reduced by letting the Multisort algorithm (Maystre and Grossglauser 2017) determine which responses to compare, so that systems with similar performance were compared with each other more often. Bootstrapping over the 100 randomly chosen dialogue contexts was used to determine average ranks and assess the ranking robustness (Hall, Miller, and others 2009).

Automatic evaluation

In addition to human evaluation, we also assess model performance using automatic metrics. The models were evaluated on MetaLWOz against word-overlap metrics such as BLEU-1 to BLEU-3, CIDEr, METEOR, and ROUGE-L using the NLGEval package (Sharma et al. 2017). Although not ideal for the specifics of dialogue and spoken language in general (Lowe et al. 2017; Dziri et al. 2019), such metrics approximate the overall quality of a generative model and are especially useful for intermediate evaluation. We evaluate models in two modes on MetaLWOz: in pure task, support dialogues are drawn from the same domain and task as the target dialogue; in cross task, support and target dialogues are from the same domain, but different tasks.
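The bootstrap over dialogue contexts can be sketched as follows; the helper name and the per-context ranks are hypothetical, and real judgments would come from the pairwise comparisons described above:

```python
import random

def bootstrap_ranks(per_context_ranks, n_boot=1000, seed=0):
    # Resample dialogue contexts with replacement and recompute each
    # system's average rank; the spread of these resampled averages
    # indicates how robust the ranking is.
    rng = random.Random(seed)
    systems = list(per_context_ranks)
    n = len(per_context_ranks[systems[0]])
    boot_means = {s: [] for s in systems}
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # one bootstrap resample
        for s in systems:
            ranks = per_context_ranks[s]
            boot_means[s].append(sum(ranks[i] for i in idx) / n)
    return {s: sum(m) / n_boot for s, m in boot_means.items()}

# Hypothetical per-context ranks for two systems over five contexts.
stats = bootstrap_ranks({"gold": [1, 1, 1, 2, 1], "hybrid": [2, 2, 3, 1, 2]})
print(stats["gold"] < stats["hybrid"])  # gold ranks better on average -> True
```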
We also perform an additional evaluation of Entity/Intent F1 on the MultiWOZ dataset in pure task mode, with pretrained NLU taggers from the ConvLab package (Lee et al. 2019b). There is no MultiWOZ data available at the first stage (base model training), so all the exposure our model has to this dataset is via support dialogues. Complementary to the MetaLWOz evaluation, this stage is designed to assess the models' goal-oriented performance.

Results and discussion

Human evaluation

Results of the pairwise comparisons are shown in Table 1. Our GPT-2 hybrid system's responses (Team B) were preferred by the judges in 56% of direct comparisons. This surpasses the next-best system (Team C) by more than 4%, with only the gold human responses being chosen more frequently. Furthermore, from the bootstrap ranking distribution (Figure 2), we see that, apart from the gold human responses, our model's outputs are consistently preferred over the other submissions by the judges. Of all the metrics used, the most notable are appropriateness and usefulness. On the former, the GPT-2 hybrid's responses have the second visible peak at rank 1, competing with the gold responses. On usefulness, however, rank 1 is held by the gold responses with no variation, and our model has its second visible peak at rank 3, almost tying with Team C.

Automatic evaluation

Results on MetaLWOz and MultiWOZ against the automatic evaluation metrics are shown in Tables 2 and 3, respectively. We observe that the retrieval baselines attain very competitive performance on both datasets, with FastText embeddings from Reddit leading to overall better results than off-the-shelf BERT, especially in the pure task setting. With GPT-2, we performed an ablation study to take a closer look into its performance.
We evaluated three versions: 'hybrid', which we presented in this paper; '–ret', with the retrieval logic turned off; and '–sup', with no retrieval logic and no fine-tuning to the support set. As seen in Table 2, there is a strong dependence on support dialogues ('–sup' vs. '–ret'), as the base model mostly struggles to compete with the baselines. Adding the retrieval logic ('hybrid' vs. '–ret') results in further performance gains. HRED and GPT-2 –sup, the two models that did not use support dialogues, had comparable performance on MetaLWOz.

Table 4: GPT-2 hybrid example responses

Context:
  Wiz: Hello how may I help you?
  Usr: I need some advice about Rome.
  Wiz: What do you want to know?
Gold response:
  Usr: I like history, where should I go?
Predicted response:
  Usr: Who was the founder of Rome?

Context:
  Wiz: Hello, how may I help you?
  Usr: I'm looking for a train to broxbourne.
  Wiz: Where would you like to leave from, and what day and time would you like to travel?
  Usr: I'm going to be leaving from Cambridge. I need to leave on Tuesday by 13:00.
  Wiz: Is there a time you would like to get there by?
Gold response:
  Usr: I don't have an arrival preference.
Predicted response:
  Usr: No I don't have a leave time preference. Just not too early if I don't have too.

Context:
  Wiz: Hello how may I help you?
  Usr: I'd like to book a flight.
  Wiz: Okay, any particular place in mind?
  Usr: How am I supposed to book a flight to Greece? Me and my kawaii girlfriend were wondering. She says “Ooooo, Greece. :3”
  Wiz: I can book a flight for two if you want, There will be an evening flight
  Usr: Hmm, wait, I don't really wanna book a flight. We were just curious! She says “Hey! No bookies! :(”
  Wiz: Oh, I was confused by that. Can I have your email address so I could send several flight options for you
Gold response:
  Usr: Sure thing! My email address is “weeabooking@otaku.corn”. She says “I wanna watch my anime now! ; ;”
Predicted response:
  Usr: Well, I guess I'll just get back to you. Thanks!

Table 5: GPT-2 hybrid generate/retrieve response ratio

Dataset / domain        Generated (%)   Retrieved (%)
MetaLWOz pure task
  booking flight        64.10           35.90
  hotel reserve         63.80           36.20
  tourism               57.40           42.60
  vacation ideas        61.70           38.30
MetaLWOz cross task
  booking flight        68.20           31.80
  hotel reserve         74.80           25.20
  tourism               73.90           26.10
  vacation ideas        74.70           25.30
MultiWOZ
  attraction            55.60           44.40
  hospital              60.00           40.00
  hotel                 63.00           37.00
  police                52.10           47.90
  restaurant            61.30           38.70
  taxi                  64.30           35.70
  train                 61.00           39.00

In goal-oriented metrics on MultiWOZ (see Table 3), the same performance pattern is observed with the retrieval models, but GPT-2 in the generative-only version performs surprisingly better when not fine-tuned to the support set ('–sup'). On the other hand, the hybrid model gains even more performance than on MetaLWOz. Presumably, generating responses for this dataset is harder because it is not represented at the main training stage, and there is not much utterance overlap with MetaLWOz, so little knowledge transfer takes place in this experiment. Compared to other submissions, we observe that GPT-2 hybrid still outperforms most of the competitors and only gives way to Team A's system. We hypothesize that the best MultiWOZ model (Team A) was fitted too tightly to the automatic evaluation metrics, with the negative side effect observable in the human evaluation results of Table 1 and Figure 2, where this system was predominantly ranked 4th and 5th.

Retrieval and Generation Frequency

In Table 5, we show per-domain ratios of retrieved/generated responses from the hybrid model. We find that the majority of the responses are generated, and the retrieval logic works as the fallback option.
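The hybrid selection rule, with retrieval acting as a fallback when the generator's candidate scores poorly, can be sketched as below; `nsp_score` is a hypothetical stand-in for the fine-tuned model's NSP classification head:

```python
def pick_response(generated, retrieved, nsp_score):
    # Score both candidates against the dialogue context with an
    # NSP-style classifier and keep the more probable one, so the
    # retrieved turn serves as a fallback for low-confidence generations.
    p_gen, p_ret = nsp_score(generated), nsp_score(retrieved)
    if p_gen >= p_ret:
        return generated, "generated"
    return retrieved, "retrieved"

# Hypothetical NSP probabilities for two candidate responses.
scores = {"I can book that flight for you.": 0.91, "Turn B": 0.34}
resp, source = pick_response("Turn B", "I can book that flight for you.",
                             lambda r: scores[r])
print(source)  # -> retrieved
```

Counting how often each branch wins over a domain's dialogues yields the generate/retrieve ratios reported in Table 5.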
On MetaLWOz, to which the model had more exposure during training, the ratio of generated responses is generally slightly higher than on MultiWOZ, which the model only saw via support dialogues. Consequently, the model's overall confidence on the latter dataset is lower, which results in more frequent fallbacks. Overall, we observe in Table 4 that there are many cases in the data where the gold response cannot possibly be inferred from the dialogue context. Specifically, the task was posed in a way that no extra data, such as a knowledge base or task description, was provided to the system; therefore, the main goal intended for the hypothetical ideal system is to naturally model human responses in a cooperative goal-oriented dialogue, and to do so in a data-efficient way. This is reflected in the way human judges are asked about response quality.

Conclusion and future work

We presented a hybrid generative/retrieval approach to goal-oriented dialogue with fast domain adaptation via transfer learning. It attains robust and diverse language generation performance across domains, and uses retrieval logic as a fallback mechanism in cases of low confidence. Our method is the winning entry at the DSTC-8 Fast Domain Adaptation task, achieving state-of-the-art performance as evaluated with human judges. In additional automatic evaluation, it attains competitive generalization performance in adaptation to the goal-oriented MultiWOZ dataset without any exposure to that data during the main training stage. Overall, we observe that transfer learning, while being at the core of state-of-the-art methods for dialogue domain adaptation and few-shot learning (Shalyminov et al. 2019a; Shalyminov et al. 2019b), still does not attain a performance level sufficient for direct adoption in industry.
It is evident that the problem of data-efficient dialogue response generation needs further research, and one promising direction that we are going to explore in our own future work is the meta-learning framework (Qian and Yu 2019), or 'learning to fine-tune'. Based on splitting the task into multiple subtasks, solving them with separate versions of the model, and then merging each individual learner's progress, the meta-learning approach will naturally fit our multi-domain setup and potentially lead to better fine-tuning performance."
+ },
+ {
+ "url": "http://arxiv.org/abs/1910.01302v1",
+ "title": "Data-Efficient Goal-Oriented Conversation with Dialogue Knowledge Transfer Networks",
+ "abstract": "Goal-oriented dialogue systems are now being widely adopted in industry where\nit is of key importance to maintain a rapid prototyping cycle for new products\nand domains. Data-driven dialogue system development has to be adapted to meet\nthis requirement --- therefore, reducing the amount of data and annotations\nnecessary for training such systems is a central research problem.\n In this paper, we present the Dialogue Knowledge Transfer Network (DiKTNet),\na state-of-the-art approach to goal-oriented dialogue generation which only\nuses a few example dialogues (i.e. few-shot learning), none of which has to be\nannotated. We achieve this by performing a 2-stage training. Firstly, we\nperform unsupervised dialogue representation pre-training on a large source of\ngoal-oriented dialogues in multiple domains, the MetaLWOz corpus. Secondly, at\nthe transfer stage, we train DiKTNet using this representation together with 2\nother textual knowledge sources with different levels of generality: ELMo\nencoder and the main dataset's source domains.\n Our main dataset is the Stanford Multi-Domain dialogue corpus. We evaluate\nour model on it in terms of BLEU and Entity F1 scores, and show that our\napproach significantly and consistently improves upon a series of baseline\nmodels as well as over the previous state-of-the-art dialogue generation model,\nZSDG. The improvement upon the latter --- up to 10% in Entity F1 and the\naverage of 3% in BLEU score --- is achieved using only the equivalent of 10% of\nZSDG's in-domain training data.",
+ "authors": "Igor Shalyminov, Sungjin Lee, Arash Eshghi, Oliver Lemon",
+ "published": "2019-10-03",
+ "updated": "2019-10-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "I.2.7"
+ ],
+ "main_content": "Introduction

Machine learning-based dialogue systems, while still a relatively new research direction, are experiencing increasingly wide adoption in industry. Large-scale dialogue assistant platforms such as Google Assistant, Amazon Alexa, and Apple Siri provide a unified conversational user interface (CUI) for third-party applications and services. Furthermore, products like Google Dialogflow, Wit.ai, Microsoft LUIS, and Rasa offer means for rapid development of a dialogue system's core modules. In addition, with the recently adopted technique of training dialogue systems end-to-end, the data efficiency of such systems becomes the key question for their adoption in practical applications. Currently, while being extremely flexible and requiring little to no programming of in-domain business logic (see e.g. Ultes et al. (2018); Wen et al. (2017); Rojas-Barahona et al. (2017)), such systems consume too much data, in terms of both collection and annotation effort, to be used in rapidly paced industrial product cycles. Therefore, approaches to training such systems with extremely limited data (i.e. zero-, one- and few-shot training) are a priority research direction in the dialogue systems area. In this paper, we present the Dialogue Knowledge Transfer Network (or DiKTNet), a generative goal-oriented dialogue model designed for few-shot learning, i.e. training using only a small number of complete in-domain dialogues. The key underlying concept of this model is transfer learning: DiKTNet makes use of latent text representations learned from several sources, ranging from large-scale general-purpose textual corpora to similar dialogues in domains different from the target one. We use the evaluation framework of Zhao and Eskenazi (2018) and the same dataset, and mainly compare our approach to theirs.
While their method does not require complete in-domain dialogues and uses annotated utterances instead (and is therefore described as “zero-shot”), we show that our model achieves superior performance with roughly the same amount of data (with respect to in-domain utterances) while requiring no annotations whatsoever.

Figure 1: DI-VAE and DI-VST (DiKTNet Stage 1)

2 Related Work

The problem of data efficiency of dialogue systems has been extensively researched in the past. Starting with the domain adaptation of a dialogue state tracker (Henderson et al., 2014), approached using Bayesian Processes (Gasic et al., 2017) and Recurrent Neural Networks (Mrksic et al., 2015), there has been significant work on training different dialogue system components using as little data as possible. As such, Williams et al. (2017) introduced a dialogue management model designed for bootstrapping from limited training data and further fine-tuning. A recent paper by Vlasov et al. (2018) introduced a dialogue management model which uses a unified embedding space for user and system turns, allowing efficient cross-domain knowledge transfer. There also exist approaches to end-to-end dialogue generation. Eshghi et al. (2017) proposed a linguistically informed model based on an incremental semantic parser (Eshghi et al., 2011) combined with a reinforcement learning-based agent. The parser was used both for maintaining the agent's state and for pruning the agent's incremental, word-level generation actions (only the actions leading to syntactically correct word sequences were allowed for the agent to take). While outperforming end-to-end dialogue models on the bAbI Dialog Tasks in a zero-shot setup (Shalyminov et al., 2017) thanks to its prior linguistic knowledge in the form of a dialogue grammar, this method inherited the grammar's limitations as well.
Specifically, it is limited to a single domain until a wide-coverage grammar is available. Meta-learning has also gained a lot of attention as a way to train models for maximally efficient adaptation to new data. As such, Qian and Yu (2019) presented such an approach for fast adaptation of a dialogue model to a new domain. While highly promising, its main result was achieved on a synthetic dataset and would ideally need more testing on real data. Finally, the method we directly compare our approach to is that of Zhao and Eskenazi (2018), who introduced the Zero-Shot Dialogue Generation (ZSDG) task and the corresponding model. In their work, they use a unified latent space for user utterances, system turns, and domain descriptions in the form of utterance-annotation pairs. Since they only used such utterances and no full dialogues for the target domain, they presented this approach as “zero-shot” learning. In our approach, we do use complete in-domain dialogues, but with significantly less data with respect to the number of in-domain utterances. Moreover, our method requires no annotation whatsoever. Recent research in Natural Language Processing has shown that transferring text representations learned on larger data sources benefits target models' performance, just as was the case with ImageNet-based computer vision models (Deng et al., 2009). For text, the main means of transfer were Word2Vec and GloVe embeddings (Mikolov et al., 2013; Pennington et al., 2014), recently extended with context-aware models like ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018). Trained on large and diverse textual corpora, they were shown to improve target models' performance on a number of Natural Language Processing tasks.
Although highly beneficial, those models may not be sufficient for the case of dialogue, as response generation for goal-oriented dialogue from extremely limited data requires specialized tools. General-purpose embeddings lack specificity for close dialogue domains, since they have been learned from very heterogeneously distributed data: in dialogue, the distribution of word sequences is highly specific to a given domain or task, i.e. word sequences in dialogue can take on an astonishingly wide variety of meanings in different contexts. In this paper, we will work with autoencoders, a class of unsupervised text representation models working via reconstructing the input. Specifically, a Variational Autoencoder (VAE) was considered the main means to learn robust text representations (Bowman et al., 2016). However, the model itself was challenging to train and was mainly used with plenty of workarounds, and variants of this model with improved stability have recently started to appear. One such model we will use in this paper is that of Zhao et al. (2018) (see Section 4.1 for more detail).

3 Few-Shot Dialogue Generation

We first describe the task we are addressing in this paper, and the corresponding base model. Specifically, we have a set of dialogues in source domains and just a few seed dialogues in the target domain. The main task of the model is: having been trained on all the available source data, to fine-tune on the target data and to be further evaluated on the full set of target-domain dialogues. We base our model for this task on a Hierarchical Encoder-Decoder (HRED) architecture with attention-based copying (Merity et al., 2017). The base optimization objective is as follows:

$\mathcal{L}_{HRED} = \log p_{F_d}(x_{sys} \mid F_e(c, x_{usr}))$  (1)

where $x_{usr}$ is the user's query, $x_{sys}$ is the system's response, $c$ is the dialogue context, and $F_e$ and $F_d$ are respectively the hierarchical encoder and decoder.
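To make KB tokens usable at generation time, the decoder's vocabulary distribution is mixed with an attention-based copy distribution over context tokens (the pointer-mixture idea used below). A small numerical sketch with toy sizes and weights, not the paper's configuration:

```python
import numpy as np

def pointer_sentinel_mixture(p_vocab, attn_weights, context_token_ids, g):
    # Mix the vocabulary distribution with an attention-based copy
    # distribution; repeated context tokens accumulate attention mass.
    p_ptr = np.zeros_like(p_vocab)
    for alpha, tok in zip(attn_weights, context_token_ids):
        p_ptr[tok] += alpha
    return g * p_vocab + (1.0 - g) * p_ptr

p_vocab = np.full(5, 0.2)            # uniform toy vocabulary distribution
attn = np.array([0.5, 0.3, 0.2])     # attention over 3 context tokens
ctx_ids = [2, 2, 4]                  # token ids appearing in the context
p = pointer_sentinel_mixture(p_vocab, attn, ctx_ids, g=0.25)
print(round(p.sum(), 6), int(np.argmax(p)))  # -> 1.0 2
```

Because both component distributions sum to one and the mixture weight is a convex combination, the result remains a valid distribution; here the repeated context token (id 2) dominates.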
We work with goal-oriented dialogues, so it is natural in our setting to take into account an underlying Knowledge Base (or API) providing results for the user's queries. Given that such KB information may largely contain unseen token sequences, especially in the target domain, we use a copy mechanism in order to be able to use this information in the system's responses. More specifically, we represent KB info as token sequences and concatenate it to the dialogue context, similarly to the CopyNet setup of Eric et al. (2017). Our copy mechanism's implementation is the Pointer-Sentinel Mixture Model (Merity et al., 2017; Zhao and Eskenazi, 2018):

$p(w_t \mid s_t) = g\, p_{vocab}(w_t \mid s_t) + (1 - g)\, p_{ptr}(w_t \mid s_t)$  (2)

In Eq. 2, $w_t$ and $s_t$ are respectively the output word and the decoder state at step $t$; $p_{ptr}$ is the probability of attention-based copying of the word $w_t$, and $g$ is the mixture weight:

$p_{ptr}(w_t \mid s_t) = \sum_{k_j \in I(w, x)} \alpha_{k_j, t}$  (3)

$g = \mathrm{Softmax}(u^T \tanh(W_\alpha s_i))$  (4)

where $\alpha_{k_j,t}$ is the attention weight for the $k$-th token in the flattened dialogue context at decoding step $t$, and $u$ is the sentinel vector; for more detail, see (Zhao and Eskenazi, 2018).

4 Dialogue Knowledge Transfer Network

Transfer learning is considered the key means for efficient training with minimal data, and our DiKTNet model essentially introduces several knowledge-transfer augmentations to the base HRED model described above. DiKTNet training is performed in the two stages described below.

4.1 Stage 1. Dialogue representation pre-training

Dialogue structure (e.g. word sequences) is highly specific to a given domain or task, and the meaning of conversational utterances is highly contextual, i.e. similar utterances may have different meanings depending on the context. Nevertheless, there is a lot of similarity in dialogue structure, i.e. sequences of dialogue actions, across domains: e.g.
a conversation normally starts with a mutual greeting, and a question is very often followed by an answer. Here, we propose to exploit this phenomenon by learning a latent dialogue action representation in order to better capture the dialogue structure by abstracting away from surface forms. Crucially, we learn such a representation from MetaLWOz (Lee et al., 2019), a dataset specifically created for the purposes of meta-learning and transfer learning, consisting of human-human conversations in 51 unique domains (for more detail, see Section 6). For this stage of training we use unsupervised, variational autoencoder-based (VAE) representation learning, following the Latent Action Encoder-Decoder (LAED) approach of Zhao et al. (2018). LAED's underlying model is called the Discrete Information VAE (DI-VAE), a variant of a VAE with two modifications.

Figure 2: DiKTNet Stage 2 (tokens in brackets are KB data)

Firstly, its optimization objective accounts for the mutual information $I$ between the input and the latent variable, which is implicitly discouraged in the original VAE objective (see Eqs. 5 and 6):

$\mathcal{L}_{VAE} = \mathbb{E}_x[\mathbb{E}_{q_R(z|x)}[\log p_G(x \mid z)] - KL(q_R(z \mid x) \| p(z))] = \mathbb{E}_{q(z|x)p(x)}[\log p_G(x \mid z)] - I(Z, X) - KL(q(z) \| p(z))$  (5)

$\mathcal{L}_{DI\text{-}VAE} = \mathcal{L}_{VAE} + I(Z, X) = \mathbb{E}_{q_R(z|x)p(x)}[\log p_G(x \mid z)] - KL(q(z) \| p(z))$  (6)

where $x$ is the input utterance, $z$ is the latent variable ($X$ and $Z$ corresponding to their batch-wise vectors), $R$ and $G$ are the recognition and generation models (implemented as RNNs) respectively, and $q(z) = \mathbb{E}_x[q_R(z \mid x)]$. Secondly, the latent variable $z$ in DI-VAE is discrete, as opposed to the continuous one in a vanilla VAE. The discrete latent code lends itself well to interpretation and can be viewed as a form of unsupervised dialogue act tagging.
The discrete nature also makes the calculation of the KL term more tractable via the Batch Prior Regularization technique (Zhao et al., 2018):

$KL(q'(z) \| p(z)) = \sum_{k=1}^{K} q'(z = k) \log \frac{q'(z = k)}{p(z = k)}$  (7)

where $K$ is the number of $z$'s possible values and $q'(z)$ is the approximation to $q(z)$ over $N$ data points:

$q'(z) = \frac{1}{N} \sum_{n=1}^{N} q_R(z \mid x_n)$  (8)

In addition, we employ DI-VST, DI-VAE's counterpart working in a Variational Skip-Thought manner (Hill et al., 2016), which reconstructs the input $x$'s previous ($x_p$) and next ($x_n$) context utterances instead:

$\mathcal{L}_{DI\text{-}VST} = \mathbb{E}_{q_R(z|x)p(x)}[\log p^n_G(x_n \mid z)\, p^p_G(x_p \mid z)] - KL(q(z) \| p(z))$  (9)

The DI-VAE and DI-VST models are visualized in Figure 1. In the downstream DiKTNet model, we use the DI-VAE autoencoder to obtain the representation of the user's query: $z_{usr} = \text{DI-VAE}(x_{usr})$. In turn, DI-VST is used to obtain a prediction of the system's action $z_{sys}$ in discretized latent form, given the user's input $x_{usr}$ as well as the full dialogue context $c$. For that, the DI-VST autoencoder is used as part of a hierarchical, context-aware encoder-decoder response generation model (which we refer to as LAED itself). Its optimization objective is as follows:

$\mathcal{L}_{LAED}(\theta_F, \theta_\pi) = \mathbb{E}_{q_R(z_{sys}|x_{sys})p(x_{sys},c)}[\log p_\pi(z_{sys} \mid c) + \log p_F(x_{sys} \mid z_{sys}, c)]$  (10)

where $\theta_F$ is the set of parameters of the context-aware encoder and decoder, and $\theta_\pi$ is the set of parameters of the policy $\pi$, the component trained to directly predict $z_{sys}$ from the context $c$. We use different models for different aspects of the dialogue: DI-VAE for the user utterance representation, and the DI-VST-based LAED for the system's action prediction. In that, we follow the intuition of Zhao and Eskenazi (2018), who argued that DI-VAE is better at capturing the specific words of an utterance, while DI-VST better represents the overall dialogue action.
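The Batch Prior Regularization of Eqs. 7-8 can be checked numerically; a minimal sketch with a toy batch of discrete posteriors and a uniform prior (the uniform prior is an assumption for illustration):

```python
import numpy as np

def batch_prior_kl(q_zx):
    # Eq. 8: approximate the aggregated posterior q'(z) by averaging the
    # per-example posteriors q_R(z|x_n) over the batch, then Eq. 7:
    # KL(q'(z) || p(z)) against a uniform prior over the K discrete codes.
    q_prime = q_zx.mean(axis=0)
    K = q_prime.shape[0]
    p = np.full(K, 1.0 / K)
    return float(np.sum(q_prime * np.log(q_prime / p)))

# Toy batch: two posteriors over K = 4 discrete latent codes.
q = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.1, 0.7, 0.1, 0.1]])
print(batch_prior_kl(q) > 0.0)  # -> True (zero only when q'(z) is uniform)
```

Note that the KL is applied to the batch-averaged posterior rather than to each example's posterior, which is what makes the regularizer tractable for discrete codes.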
We train these two models on MetaLWOz in an unsupervised way with the objectives described above, and use their discretized latent codes $z_{usr}$ and $z_{sys}$ respectively in the downstream model at the next stage of training.

4.2 Stage 2. Transfer

At this stage, we train directly for our target task, few-shot dialogue generation, and thus return to the model described in Section 3. While the training procedure of this model naturally assumes domain transfer, we provide it with additional sources of textual and dialogue knowledge of varying generality, described below. As opposed to direct domain transfer, we incorporate domain-general dialogue understanding from the LAED representation trained on MetaLWOz at the previous stage. LAED captures the background top-down dialogue structure: sequences of dialogue acts in a cooperative conversation, latent dialogue act-induced clustering of utterances, and the overall phrase structure of spoken utterances. We incorporate this information into the model by conditioning HRED's decoder on the combined latent codes from Stage 1, and we refer to this model as HRED+LAED:

$\mathcal{L}_{HRED+LAED} = \mathbb{E}_{p(x_{usr},c)\,p(z_{usr}|x_{usr})\,p_\pi(z_{sys}|x_{usr},c)}[\log p_{F_d}(x_{sys} \mid \{F_e(x_{usr}, c), z_{usr}, z_{sys}\})]$  (11)

where $z_{usr}$ and $z_{sys}$ are respectively samples obtained from the DI-VAE user utterance model and the LAED/DI-VST system action model, and $\{\cdot\}$ is the concatenation operator. The last, most general source of knowledge we use is a pre-trained ELMo model (Peters et al., 2018). Apart from using an underlying bidirectional RNN encoder, ELMo captures both token-level and character-level information, which is especially crucial for understanding unseen tokens and KB items in the underrepresented target domain. The HRED model with ELMo as the utterance-level encoder is referred to as HRED+ELMo. Finally, DiKTNet is the HRED augmented with both the ELMo encoder and the LAED representation. DiKTNet is visualized in Figure 2.
The model (as well as the variants listed above) is implemented in PyTorch (Paszke et al., 2017), and the code is openly available.[1]

5 Baselines

We perform an exhaustive ablation study of DiKTNet by comparing it to all of its variations mentioned above: HRED, HRED+ELMo, and HRED+LAED. In addition, we have HRED+VAE, a version of HRED+LAED for which we use a regular, continuous VAE in place of DI-VAE and DI-VST in order to determine the impact of discretized latent codes (see Eq. 5 for the corresponding objective function). Furthermore, we compare DiKTNet to the previous state-of-the-art approach, Zero-Shot Dialogue Generation (Zhao and Eskenazi, 2018). This model did not use any complete in-domain dialogues; instead, it relied on annotated utterances in all of the domains. We use it as-is (ZSDG), as well as in the following variation. We make use of its central idea of 'domain descriptions' bridging dialogue understanding across domains, but instead of using manually annotated utterances, we employ automatic Natural Language Understanding markup. Our NLU annotations include:

• Named Entity Recognition: a Stanford NER model ensemble of case-sensitive and caseless models (Finkel et al., 2005),
• Date/time markup: Stanford SUTime (Chang and Manning, 2012),
• Wikidata entity linking: Yahoo FEL (Blanco et al., 2015; Pappu et al., 2017).
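Serializing this markup into flat domain descriptions can be sketched as follows; `serialize_annotations` and its input format are hypothetical stand-ins for the output of the NER/SUTime/FEL pipeline:

```python
def serialize_annotations(spans):
    # Flatten (label, surface form) pairs from the NLU pipeline into a
    # token-sequence domain description.
    return " ".join(f"{label} {text}" for label, text in spans)

# Hypothetical pipeline output for "Will it be cloudy in Los Angeles on Thursday?"
desc = serialize_annotations([("LOCATION", "Los Angeles"), ("DATE", "Thursday")])
print(desc)  # -> LOCATION Los Angeles DATE Thursday
```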
[1] https://bit.ly/fsdg_emnlp2019

Table 1: Evaluation results (mean ± std over 10 runs; all values in %). Results marked with an asterisk are individual results higher than ZSDG's that are achieved with the minimum amount of training data; DiKTNet@1% (set in bold in the original) is the model consistently outperforming ZSDG in all domains and metrics with minimum data.

Model              Navigation           Weather              Schedule
                   BLEU      Entity F1  BLEU      Entity F1  BLEU      Entity F1
ZSDG               5.9       14.0       8.1       31.0       7.9       36.9
NLU ZSDG           6.1±2.2   12.7±3.3   5.0±1.6   16.8±6.7   6.0±1.7   26.5±5.4
NLU ZSDG+LAED      7.9±1.0   12.3±2.9   8.7±0.6   21.5±6.2   8.3±1.0   20.7±4.8
HRED@1%            6.0±1.8   9.8±4.8    6.9±1.1   22.2±10.7  5.5±0.8   25.6±8.2
HRED@3%            7.9±0.7   11.8±4.4   9.6±1.8   39.8±7.0   8.2±1.1   34.8±4.4
HRED@5%            8.3±1.3   15.3±6.3   11.5±1.6  38.0±10.5  9.7±1.4   37.6±8.0
HRED@10%           9.8±0.8   19.2±3.2   12.9±2.4  40.4±11.0  12.0±1.0  38.2±4.2
HRED+VAE@1%        3.6±2.6   9.3±4.1    6.8±1.3   23.2±10.1  4.6±1.6   28.9±7.3
HRED+VAE@3%        6.9±1.9   15.6±5.8   9.5±2.6   32.2±11.8  6.6±1.7   34.8±7.7
HRED+VAE@5%        7.8±1.9   12.7±4.2   10.1±2.1  40.3±10.4  8.2±1.7   34.2±8.7
HRED+VAE@10%       9.0±2.0   18.0±5.8   12.9±2.2  40.1±7.6   11.6±1.5  39.9±6.9
HRED+LAED@1%       7.1±0.8   10.1±4.5   10.6±2.1  31.4±8.1   7.4±1.2   29.1±6.6
HRED+LAED@3%       9.2±0.8   14.5±4.8   13.1±1.7  40.8±6.1   9.2±1.2   32.7±6.1
HRED+LAED@5%       10.3±1.2  15.6±4.5   14.5±2.2  40.9±8.6   11.8±1.9  37.6±6.1
HRED+LAED@10%      12.3±0.9  17.3±4.5   17.6±1.9  47.5±6.0   15.2±1.6  38.7±8.4
HRED+ELMo@1%       5.8±1.9   18.2±3.8*  7.3±2.6   38.5±11.1  6.3±2.6   36.3±9.2
HRED+ELMo@3%       8.0±1.3   17.2±4.2   10.6±1.1  42.0±11.0  9.5±2.0   39.6±9.2
HRED+ELMo@5%       9.4±0.8   21.5±7.3   12.1±2.0  39.0±12.8  11.3±2.1  40.0±5.6
HRED+ELMo@10%      9.9±1.1   24.3±5.7   14.9±2.7  41.4±12.0  14.5±1.4  43.4±3.9
DiKTNet@1%         8.4±0.7*  15.2±4.0   11.5±1.7* 43.0±10.5* 8.1±0.8*  40.5±6.3*
DiKTNet@3%         10.4±1.2  19.2±4.8   15.7±2.1  44.0±11.7  11.1±1.3  38.2±5.8
DiKTNet@5%         11.5±1.1  23.9±2.9   15.5±2.1  39.5±14.8  13.7±2.0  41.1±3.8
DiKTNet@10%        12.9±1.0  26.8±4.2   20.4±1.2  48.0±5.6   17.5±1.3  42.8±2.6

We serialize annotations from these sources into token sequences and make domain description tuples out of all the utterances in the source and target domains. In this way, most of our domain descriptions share the structure and content of the original ones. For example, for the phrase 'Will it be cloudy in Los Angeles on Thursday?', the original ZSDG annotation is of the form 'request #goal cloudy #location Los Angeles #date Thursday'. Our NLU annotation for this phrase is 'LOCATION Los Angeles DATE Thursday'. We have two models in this setup, with (NLU ZSDG+LAED) and without (NLU ZSDG) the use of the LAED representation.

6 Datasets

Table 2: MetaLWOz dataset statistics
Number of Domains:     51
Number of Dialogues:   40,388
Mean dialogue length:  11.91

We use the Stanford Multi-Domain (SMD) dialogue dataset (Eric et al., 2017), containing human-human goal-oriented dialogues in three domains: appointment scheduling, city navigation, and weather information.

Table 3: Stanford Multi-Domain dataset statistics (train set)
Statistic             Navigation  Weather  Schedule
Dialogues             800         797      828
Utterances            5248        4314     3170
Avg. dialogue length  6.56        5.41     3.83
Each dialogue has to do with a single task queried by the user and thus comes with additional knowledge base information coming from implicit querying of the underlying domain-specific API. Although sharing some common features (the setting of an intelligent in-car assistant and the use of the underlying KB), the dialogues differ significantly across domains, which makes domain transfer sufficiently challenging. For the latent representation learning, we use MetaLWOz, a goal-oriented dialogue dataset containing human-human dialogues in diverse domains and several tasks in each of those. The dialogues are collected using the Wizard-of-Oz method, where human participants were given a problem domain and a specific task in it, and were asked to complete the task via dialogue. No domain-specific APIs or knowledge bases were available for the participants, and in the actual dialogues they were free to use fictional names and entities in a consistent way. The dataset\u2019s statistics are shown in Table 2. All domains available in the MetaLWOz dataset are listed in Table 6 of Appendix A. 7 Experimental setup and evaluation Our few-shot setup is as follows. Given the target domain, we first train LAED model(s) on the MetaLWOz data \u2014 and here we exclude from training every domain that might overlap with the target one. Specifically, for the Navigation domain in SMD, it\u2019s Store Details, for Weather it\u2019s Weather Check, and for Schedule it\u2019s Update Calendar and Appointment Reminder. In our final setup, at Stage 1 we used a DI-VST-based LAED and a DI-VAE, both of size 10 \u00d7 5. Next, having trained and frozen the Stage 1 models, we train DiKTNet on all the source domains from the SMD dataset. We use a random sample of the target domain utterances together with their contexts and KB info, varying the amount of those from 1% to 10% of all available target data.
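The data regime above (exclude overlapping MetaLWOz domains before Stage 1, then take 1% to 10% of target-domain data as seed data) can be sketched as follows. The domain names follow the text; the helper functions themselves are illustrative assumptions.

```python
import random

# MetaLWOz domains excluded per SMD target domain, as listed above.
OVERLAPPING = {
    'navigation': {'Store Details'},
    'weather': {'Weather Check'},
    'schedule': {'Update Calendar', 'Appointment Reminder'},
}

def laed_training_domains(all_metalwoz_domains, smd_target):
    # Drop every MetaLWOz domain that might overlap with the target one.
    return [d for d in all_metalwoz_domains if d not in OVERLAPPING[smd_target]]

def sample_seed_dialogues(target_dialogues, fraction, seed=0):
    # Take a random fraction (1% to 10% in the experiments) of target data.
    k = max(1, round(fraction * len(target_dialogues)))
    return random.Random(seed).sample(target_dialogues, k)

print(laed_training_domains(['Weather Check', 'Store Details', 'Ski'], 'weather'))
print(len(sample_seed_dialogues(list(range(800)), 0.01)))
```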
For the NLU ZSDG setup, we annotated all available SMD data and randomly selected a subset of 1000 utterances from each source domain, and 200 utterances from the target domain. For source domains, this number amounts to roughly a quarter of all available training data \u2014 we chose it in order to make use of as much annotated data as possible while keeping the domain description task secondary. For the target domain, we made sure to stay under roughly the same in-domain data requirements as the ZSDG baseline. For evaluation, we follow the approach of Zhao and Esk\u00e9nazi (2018) and report BLEU and Entity F1 scores. Given the non-deterministic nature of our training setup, we report means and variances of our results over 10 runs with different random seeds. We also perform an additional evaluation of DiKTNet\u2019s performance with extended amounts of target data and compare it to the original Key-Value Retrieval Network (KVRet) by Eric et al. (2017), which was originally trained with all the available data. In this case, we average BLEU scores across all 3 SMD domains in order to be consistent with the form in which the corresponding results are presented in the original paper. We train our models with the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001. Our hierarchical models\u2019 utterance encoder is an LSTM cell (Hochreiter and Schmidhuber, 1997) of size 256, and the dialog-level encoder is a GRU (Cho et al., 2014) of size 512. 8 Results and discussion Our results are shown in Table 1 \u2014 our objective here is maximum accuracy with minimum training data required. 8.1 Results for the few-shot setup It can be seen that few-shot models with LAED representation are the best performing models for this objective.
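The evaluation above reports BLEU and Entity F1 as means and variances over 10 seeded runs. Below is a sketch of a set-based Entity F1 and the aggregation step; the exact entity-matching rules of Zhao and Eskénazi (2018) may differ, so treat the scoring details as an assumption.

```python
from statistics import mean, stdev

def entity_f1(gold_entities, predicted_entities):
    # Set-based F1 over the entities mentioned in a response.
    gold, pred = set(gold_entities), set(predicted_entities)
    if not gold and not pred:
        return 1.0
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def aggregate(run_scores):
    # Report mean and spread over runs with different random seeds.
    return mean(run_scores), stdev(run_scores)

f1 = entity_f1({'214 El Camino Real', 'Stanford Express Care'}, {'Stanford Express Care'})
print(round(f1, 3))  # tp=1, precision=1.0, recall=0.5 -> 0.667
```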
While improvements upon ZSDG can already be seen with simple HRED in a few-shot setup, the use of the LAED representation and domain-general ELMo encoding helps significantly reduce the amount of in-domain training data needed: at 1% of in-domain dialogues, we see that DiKTNet consistently and significantly improves upon ZSDG in every domain. In SMD, with its average dialogue length of 5.25 turns, 1% of training dialogues amounts to approximately 40 in-domain training utterances. In contrast, the ZSDG setup used approximately 150 training utterance-annotation pairs for each domain, including the target one, totalling about 450 annotated utterances. Although in our few-shot approach we use full in-domain dialogues, we end up having significantly less in-domain training data, with the crucial difference that none of it has to be annotated for our approach. Therefore, the method we introduced attains state-of-the-art in both accuracy and data-efficiency. In turn, the results of the ZSDG NLU setup demonstrate that single utterance annotations, if not domain-specific and produced by human experts, don\u2019t provide as much signal as full dialogues, even without annotations at all. Even the significant number of such annotated utterances per domain didn\u2019t make a difference in this case. We would also like to point out that, as can be seen in the table, our results have quite high variance \u2014 the main source of it is the nature of our training/evaluation setup, where we average over 10 runs with 10 different sets of seed dialogues. However, in the majority of cases with comparable means, DiKTNet has a lower variance than the alternative models at the same percentage of seed data. And in the extreme case with 1% target data, DiKTNet improves on all the other models in terms of both means and variances. Table 4: DiKTNet\u2019s selected responses (columns: Domain; Context; Gold response; Predicted response). schedule: context \u201cRemind me to take my pills\u201d / \u201cWhat time do you need to take your pills?\u201d / \u201cI need to take my pills at 7 pm.\u201d; gold \u201cOk setting your medicine appointment for 7pm\u201d; predicted \u201cOkay, setting a reminder to take your pills at 7 pm.\u201d navigate: context \u201cFind the address to a hospital\u201d / \u201cThank you.\u201d; gold \u201cHave a good day\u201d; predicted \u201cNo problem. Stanford Express Care is at 214 El Camino Real.\u201d weather: context \u201cWhat is the weather forecast for the weekend?\u201d; gold \u201cFor what city would you like to know that?\u201d; predicted \u201cFor what city would you like the weekend forecast for?\u201d Table 5: Selected clusters of utterances sharing the same LAED codes. Cluster 1: \u201cWhere can I go shopping?\u201d \u201cWhere does my friend live?\u201d \u201cWhere can I get Chinese food?\u201d \u201cWhere can I go to eat?\u201d \u201cCan you please take me to a coffee house?\u201d Cluster 2: \u201cI\u2019d like to set a reminder for my meeting at 2pm later this month please.\u201d \u201cWhat is the time and agenda for my meeting, and who is attending?\u201d \u201cSchedule a lab appointment with my aunt for the 7th at 1pm.\u201d \u201cSchedule a calendar reminder for yoga with Jeff at 6pm on the 5th.\u201d Cluster 3: \u201cCar I\u2019m desiring to do some shopping: which one is it the nearest shopping center? Anything within 4 miles?\u201d \u201cGet the address to my friend\u2019s house that i could get to the fastest\u201d \u201cCar I need to get to my friends house, it should be within 4 miles from here\u201d 8.2 Discussion of the latent representations The comparison of the setups with different latent representations also gives us some insight: while the VAE-powered HRED model improves on the baseline in multiple cases, it lacks generalization potential compared to the LAED setup. The reason for that might be the inherently more stable training of LAED due to its modified objective function, which in turn results in a more informative representation providing better generalization. In order to have a glimpse into the LAED-produced clustering, in Table 5 we present a snippet of the utterance clusters sharing the same, most frequent latent codes throughout the dataset (the clustering is obtained with LAED model trained on every domain but \u2018Store details\u2019, i.e.
the one for the evaluation on the \u2018Navigate\u2019 SMD domain). From this snippet, it can be seen that those clusters work well for domain separation, as well as for capturing dialogue intents. 8.3 Results with extended data We performed an additional experiment with extended target data (see Figure 3 of Appendix A). It showed that DiKTNet, when trained with as little as 5% of target data, can outperform a KVRet trained using the entire dataset. Furthermore, with 50% of the target data, DiKTNet becomes more than twice as good as KVRet in terms of overall language generation. However, goal-oriented metrics such as Entity F1 are more challenging to bootstrap. As such, DiKTNet outperforms KVRet on the \u2018Weather\u2019 domain starting at 10% of the target data, but only shows a trend of narrowing the performance gap with KVRet on \u2018Navigate\u2019, and certainly needs more training data in the \u2018Schedule\u2019 domain. The explanation for that might be that most of the dialogue entities come from the KB snippets, which are the least represented resource in our setup. They aren\u2019t available in MetaLWOz, and in SMD, KB snippets share little in common across domains. Therefore, in order to increase Entity F1, KB information should be copied to the output more efficiently \u2014 and increasing the robustness of the copy-augmented decoder is one of our future research directions. 8.4 Discussion of the evaluation metrics We use BLEU as one of the main evaluation metrics in this paper \u2014 we do it in order to fully conform with the setup of Zhao and Esk\u00e9nazi (2018), which we base our work on. But while being widely adopted as a general-purpose language generation metric, BLEU might not be sufficient in the dialogue setting (see Novikova et al. (2017) for a review). Specifically, we have observed several cases where the model would produce an overall grammatical response with the correct dialogue intent (e.g. \u201cYou are welcome!
Anything else?\u201d), but BLEU would assign it a lower score due to word mismatch (e.g. with the reference \u201cYou\u2019re welcome!\u201d; see more examples in Table 4). This is a general issue in dialogue model evaluation, since the variability of possible responses equivalent in meaning is very high in dialogue. In future work, we will put more emphasis on the meaning of utterances, for example by incorporating external dialogue act tagging resources in the evaluation setup, which, together with general language generation metrics like perplexity, can make for more robust evaluation criteria than word overlap. 9 Conclusion and future work In this paper, we have introduced DiKTNet, a model achieving state-of-the-art dialogue generation performance in a few-shot setup, without using any annotated data. By transferring latent dialogue knowledge from multiple sources of varying generality, we obtained a model with superior generalization to an underrepresented domain. Specifically, we showed that our few-shot approach achieves state-of-the-art results on the Stanford Multi-Domain dataset while being more data-efficient than the previous best model, by requiring significantly less data, none of which has to be annotated. While being state-of-the-art, the accuracy scores themselves still suggest that our technique is not ready for immediate adoption for real-world production purposes, and the task of few-shot generalization to a completely new dialogue domain remains an area of active research. In our own future work, we will try to find ways to improve the unsupervised representation (Shi et al., 2019) in order to increase the transfer potential. We will also explore ways to enable more efficient copying from the input, which is crucial for correctly handling entities and therefore attaining high goal-oriented performance of the system. Apart from that, we will consider alternative evaluation criteria to account for the rich surface variability of natural speech."
+ },
+ {
+ "url": "http://arxiv.org/abs/1908.05854v1",
+ "title": "Few-Shot Dialogue Generation Without Annotated Data: A Transfer Learning Approach",
+ "abstract": "Learning with minimal data is one of the key challenges in the development of\npractical, production-ready goal-oriented dialogue systems. In a real-world\nenterprise setting where dialogue systems are developed rapidly and are\nexpected to work robustly for an ever-growing variety of domains, products, and\nscenarios, efficient learning from a limited number of examples becomes\nindispensable.\n In this paper, we introduce a technique to achieve state-of-the-art dialogue\ngeneration performance in a few-shot setup, without using any annotated data.\nWe do this by leveraging background knowledge from a larger, more highly\nrepresented dialogue source --- namely, the MetaLWOz dataset. We evaluate our\nmodel on the Stanford Multi-Domain Dialogue Dataset, consisting of human-human\ngoal-oriented dialogues in in-car navigation, appointment scheduling, and\nweather information domains.\n We show that our few-shot approach achieves state-of-the art results on that\ndataset by consistently outperforming the previous best model in terms of BLEU\nand Entity F1 scores, while being more data-efficient by not requiring any data\nannotation.",
+ "authors": "Igor Shalyminov, Sungjin Lee, Arash Eshghi, Oliver Lemon",
+ "published": "2019-08-16",
+ "updated": "2019-08-16",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "I.2.7"
+ ],
+ "main_content": "Introduction Data-driven dialogue systems are becoming widely adopted in enterprise environments. One of the key properties of a dialogue model in this setting is its data efficiency, i.e. whether it can attain high accuracy and good generalization properties when only trained from minimal data. Recent deep learning-based approaches to training dialogue systems (Ultes et al., 2018; Wen et al., 2017) put emphasis on collecting large amounts of data in order to account for numerous variations in the user inputs and to cover as many dialogue trajectories as possible. However, in real-world production environments there isn\u2019t enough domain-specific data easily available throughout the development process. In addition, it\u2019s important to be able to rapidly adjust a system\u2019s behavior according to updates in requirements and new product features in the domain. Therefore, data-efficient training is a priority direction in dialogue system research. In this paper, we build on a technique to train a dialogue model for a new domain in a \u2018zero-shot\u2019 setup (in terms of full dialogues in the target domain) only using annotated \u2018seed\u2019 utterances (Zhao and Esk\u00e9nazi, 2018). We present an alternative, \u2018few-shot\u2019 approach to data-efficient dialogue system training: we do use complete in-domain dialogues while using approximately the same amount of training data as Zhao and Esk\u00e9nazi (2018), with respect to utterances. However, in our method, no annotation is required \u2014 we instead use a latent dialogue act annotation learned in an unsupervised way from a larger (multi-domain) data source, broadly following the model of Zhao et al. (2018). This approach is potentially more attractive for practical purposes because it is easier to collect unannotated dialogues than to collect utterances across various domains under a consistent annotation scheme.
2 Related Work There is a substantial amount of work on learning dialogue with minimal data \u2014 starting with the Dialog State Tracking Challenge 3 (Henderson et al., 2014), where the problem was to adjust a pre-trained state tracker to a different domain using a seed dataset. In dialogue response generation, there has also been work on bootstrapping a goal-oriented dialogue system from a few examples using a linguistically informed model: (Eshghi et al., 2017) used an incremental semantic parser \u2013 DyLan (Eshghi et al., 2011; Eshghi, 2015) \u2013 to obtain contextual meaning representations, and based the dialogue state on this (Kalatzis et al., 2016). Incremental response generation was learned using Reinforcement Learning, again using the parser to incrementally process the agent\u2019s output and thus prune ungrammatical paths for the learner. Compared to a neural model \u2014 End-to-End Memory Network (Sukhbaatar et al., 2015) \u2014 this linguistically informed model was superior in a 1-shot setting (Shalyminov et al., 2017). At the same time, its main linguistic resource \u2014 a domain-general dialogue grammar for English \u2014 makes the model inflexible unless wide coverage is achieved. Transfer learning for Natural Language Processing is strongly motivated by recent advances in vision. When training a convolutional neural network (CNN) on a small dataset for a specific problem domain, it often helps to learn low-level convolutional features from a greater, more diverse dataset. For numerous applications in vision, ImageNet (Deng et al., 2009) became the source dataset for pre-training convolutional models. For NLP, the main means for transfer were Word2Vec word embeddings (Mikolov et al., 2013), which have recently been updated to models capturing contexts as well (Peters et al., 2018; Devlin et al., 2018).
While these tools are widely known to improve performance in various tasks, more specialized models could also be created for specific research areas, e.g. dialogue generation in our case. The models above are some of the approaches to one of the central issues of efficient knowledge transfer \u2014 learning a unified data representation generalizable across datasets, dubbed \u2018representation learning\u2019. In our approach, we will use one such technique based on variational autoencoding with discrete latent variables (Zhao et al., 2018). In this paper we present an approach to transfer learning which is more tailored \u2014 both model-wise and dataset-wise \u2014 to goal-oriented dialogue in underrepresented domains. 3 The approach 3.1 Zero-shot theoretical framework We first describe the original Zero-Shot Dialogue Generation (ZSDG) theoretical framework of Zhao and Esk\u00e9nazi (2018), which we base our work on. For ZSDG, there is a set of source dialogue domains and one target domain, with the task of training a dialogue response generation model from all the available source data and a significantly reduced subset of the target data (referred to as seed data). The trained system\u2019s performance is evaluated exclusively on the target domain. More specifically, the data in ZSDG is organized as follows. There are unannotated dialogues in the form of {c, x, d}_src/tgt \u2014 tuples of dialogue contexts, responses, and domain names respectively, for each of the source and target domains. There are also domain descriptions in the form of {x, a, d}_src/tgt \u2014 tuples of utterances, slot-value annotations, and domain names respectively, for source and target domains.
ZSDG is essentially a hierarchical encoder-decoder model which is trained in a multi-task fashion by receiving two types of data: (1) dialogue batches drawn from all the available source-domain data, and (2) seed data batches, a limited number of which are drawn from domain description data for all of the source and target domains. The ZSDG model optimizes two objectives. With dialogue batches, the model maximizes the probability of generating a response given the context: L_dialog = -log p_{F_d}(x | F_e(c, d)) + lambda * D(R(x, d) || F_e(c, d)) (1) where F_e and F_d are respectively the encoding and decoding components of a hierarchical generative model; R is the shared recurrent utterance encoder (the recognition model); and D is a distance function (L2 norm). In turn, with domain description batches, the model maximizes the probability of generating the utterance given its slot-value annotation, both represented as sequences of tokens: L_dd = -log p_{F_d}(x | R(a, d)) + lambda * D(R(x, d) || R(a, d)) (2) In this multi-task setup, the latent space of R is shared between both utterances and domain descriptions across all the domains. Moreover, the distance-based loss terms make sure that (a) utterances with similar annotations are closer together in the latent space (Eq. 2), and (b) utterances are closer to their dialogue contexts (Eq. 1) so that their encodings capture some of the contexts\u2019 meaning. These properties of the model make it possible to achieve better cross-domain generalization. Figure 1: Model architecture; (a) LAED pre-training, (b) zero/few-shot dialogue generation. At the pre-training stage (1a), we train the discretized LAED dialogue representation on the Transfer dataset. We then train a zero/few-shot dialogue generation model on SMD with this representation incorporated (1b). 3.2 Unsupervised representation learning As was the case with ZSDG, robust representation learning helps achieve better generalization across domains.
The most widely-adopted way to train better representations has been to leverage a greater data source. In this work, we consider unsupervised, variational autoencoder-based (VAE) representation learning on a large dataset of unannotated dialogues. The specific approach we refer to is the Latent Action Encoder-Decoder (LAED) model of Zhao et al. (2018). LAED is a variant of VAE with two modifications: (1) an optimization objective augmented with mutual information between the input and the latent variable, for better and more stable learning performance, and (2) a discretized latent variable, for the interpretability of the resulting latent actions. Just as in ZSDG, LAED is a hierarchical encoder-decoder model with the key component being a discrete-information (DI) utterance-level VAE. Two versions of this model are introduced, with respective optimization objectives: L_DI-VAE = E_{q_R(z|x) p(x)}[log p_G(x | z)] - KL(q(z) || p(z)) (3) L_DI-VST = E_{q_R(z|x) p(x)}[log p^n_G(x_n | z) p^p_G(x_p | z)] - KL(q(z) || p(z)) (4) where R and G are the recognition and generation components respectively, x is the model\u2019s input, z is the latent variable, and p(z) and q(z) are respectively the prior and posterior distributions of z. DI-VAE works in a standard VAE fashion, reconstructing the input x itself, while DI-VST follows the idea of Variational Skip-Thought, reconstructing the input\u2019s previous and next contexts {x_n, x_p}. As reported by the authors, the two models capture different aspects of utterances, i.e. DI-VAE reconstructs specific words within an utterance, whereas DI-VST captures the overall intent better; see the visualization in Figure 1a. 3.3 Proposed models In our approach, we simplify the ZSDG setup by not using any explicit domain descriptions, therefore we only work with \u2018dialogue\u2019 batches.
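Both objectives (3) and (4) above share the KL(q(z) || p(z)) term over a discretized latent variable; the models in this setup use independent categorical latent factors of size 10 x 5, i.e. ten factors with five classes each. A minimal numeric sketch of that term follows, assuming a uniform prior for illustration; the batch-level prior estimation of Zhao et al. (2018) may differ.

```python
import math

def categorical_kl(q, p):
    # KL divergence between two categorical distributions.
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def latent_kl(posteriors, n_classes=5):
    # Sum the KL terms over independent latent factors; uniform prior assumed.
    prior = [1.0 / n_classes] * n_classes
    return sum(categorical_kl(q, prior) for q in posteriors)

# Ten factors; the first is nearly deterministic after recognition,
# the other nine stay at the uniform prior (zero KL contribution).
posteriors = [[0.9, 0.025, 0.025, 0.025, 0.025]] + [[0.2] * 5] * 9
print(round(latent_kl(posteriors), 3))  # ~1.146 nats, all from the first factor
```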
We also make use of Knowledge Base information without loss of generality (see Section 5); thus we work with data of the form {c, x, k, d}, where k is the KB information. We refer to this model as Few-Shot Dialogue Generation, or FSDG. For learning a reusable dialogue representation, we use an external multi-domain dialogue dataset, the Transfer dataset (see Section 4). We perform a preliminary training stage on it where we train 2 LAED models, both DI-VAE and DI-VST. Then, at the main training stage, we use the hierarchical encoders of both models and incorporate them with FSDG\u2019s decoder. (Footnote 1: Code is available at https://bit.ly/fsdg_sigdial2019) Model Domain Navigation Weather Schedule BLEU, % Entity F1, % BLEU, % Entity F1, % BLEU, % Entity F1, % ZSDG 5.9 14.0 8.1 31 7.9 36.9 NLU ZSDG 6.1 \u00b1 2.2 12.7 \u00b1 3.3 5.0 \u00b1 1.6 16.8 \u00b1 6.7 6.0 \u00b1 1.7 26.5 \u00b1 5.4 NLU ZSDG+LAED 7.9 \u00b1 1 12.3 \u00b1 2.9 8.7 \u00b1 0.6 21.5 \u00b1 6.2 8.3 \u00b1 1 20.7 \u00b1 4.8 FSDG@1% 6.0 \u00b1 1.8 9.8 \u00b1 4.8 6.9 \u00b1 1.1 22.2 \u00b1 10.7 5.5 \u00b1 0.8 25.6 \u00b1 8.2 FSDG@3% 7.9 \u00b1 0.7 11.8 \u00b1 4.4 9.6 \u00b1 1.8 39.8 \u00b1 7 8.2 \u00b1 1.1 34.8 \u00b1 4.4 FSDG@5% 8.3 \u00b1 1.3 15.3 \u00b1 6.3 11.5 \u00b1 1.6 38.0 \u00b1 10.5 9.7 \u00b1 1.4 37.6 \u00b1 8.0 FSDG@10% 9.8 \u00b1 0.8 19.2 \u00b1 3.2 12.9 \u00b1 2.4 40.4 \u00b1 11.0 12.0 \u00b1 1.0 38.2 \u00b1 4.2 FSDG+VAE@1% 3.6 \u00b1 2.6 9.3 \u00b1 4.1 6.8 \u00b1 1.3 23.2 \u00b1 10.1 4.6 \u00b1 1.6 28.9 \u00b1 7.3 FSDG+VAE@3% 6.9 \u00b1 1.9 15.6 \u00b1 5.8 9.5 \u00b1 2.6 32.2 \u00b1 11.8 6.6 \u00b1 1.7 34.8 \u00b1 7.7 FSDG+VAE@5% 7.8 \u00b1 1.9 12.7 \u00b1 4.2 10.1 \u00b1 2.1 40.3 \u00b1 10.4 8.2 \u00b1 1.7 34.2 \u00b1 8.7 FSDG+VAE@10% 9.0 \u00b1 2.0 18.0 \u00b1 5.8 12.9 \u00b1 2.2 40.1 \u00b1 7.6 11.6 \u00b1 1.5 39.9 \u00b1 6.9 FSDG+LAED@1% 7.1 \u00b1 0.8\u22c6 10.1 \u00b1 4.5 10.6 \u00b1 2.1\u22c6 31.4 \u00b1 8.1\u22c6 7.4 \u00b1 1.2 29.1 \u00b1 6.6 FSDG+LAED@3% 9.2 \u00b1 0.8 14.5 \u00b1 4.8\u22c6 13.1 \u00b1 1.7 40.8 \u00b1 6.1 9.2 \u00b1 1.2\u22c6 32.7 \u00b1 6.1 FSDG+LAED@5% 10.3 \u00b1 1.2 15.6 \u00b1 4.5 14.5 \u00b1 2.2 40.9 \u00b1 8.6 11.8 \u00b1 1.9 37.6 \u00b1 6.1\u2217 FSDG+LAED@10% 12.3 \u00b1 0.9 17.3 \u00b1 4.5 17.6 \u00b1 1.9 47.5 \u00b1 6.0 15.2 \u00b1 1.6 38.7 \u00b1 8.4 Table 1: Evaluation results. Marked with asterisks are individual results higher than the ZSDG baseline which are achieved with the minimum amount of training data, and in bold is the model consistently outperforming ZSDG in all domains and metrics with minimum data. Thus, we have the following encoding function (which is then plugged into Eq. 1): F_e(c, k, d) = F_e^{DI-VAE}(c, k, d) \u2295 F_e^{DI-VST}(c, k, d) \u2295 F_e^{FSDG}(c, k, d) (5) where \u2295 is the concatenation operator. We refer to this model as FSDG+LAED. We compare this LAED-augmented model to a similar one, with a latent representation trained on the same data but using a regular VAE objective and thus providing regular continuous embeddings (we refer to it as FSDG+VAE): L_VAE = E_{q_R(z|x)}[log p_G(x | z)] - KL(q_R(z) || p(z)) (6) Finally, in order to explore the original ZSDG setup as much as possible, we also consider its version with automatic Natural Language Understanding (NLU) markup instead of human annotations as domain descriptions. Our NLU annotations include a Named Entity Recognizer (Finkel et al., 2005), a date/time extraction library (Chang and Manning, 2012), and a Wikidata entity linker (Pappu et al., 2017). We have models with (NLU ZSDG+LAED) and without (NLU ZSDG) the LAED representation. Our entire setup is shown in Figure 1. 4 Datasets We use the Stanford Multi-Domain (SMD) human-human goal-oriented dialogue dataset (Eric et al., 2017) in 3 domains: appointment scheduling, city navigation, and weather information. Each dialogue comes with a knowledge base snippet from the underlying domain-specific API.
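Eq. (5) above is a plain feature concatenation of three encodings of the same (context, KB, domain) input: the two frozen LAED encoders and FSDG's own trainable one. A toy sketch, with plain Python lists standing in for the neural encoders (an illustrative assumption):

```python
def concat_encoders(context, kb, domain, encoders):
    # F_e(c, k, d) = F_e^DI-VAE (+) F_e^DI-VST (+) F_e^FSDG, per Eq. (5).
    features = []
    for encode in encoders:  # order: DI-VAE, DI-VST, FSDG
        features.extend(encode(context, kb, domain))
    return features

# Stub encoders returning fixed-size feature vectors:
di_vae = lambda c, k, d: [1.0, 2.0]
di_vst = lambda c, k, d: [3.0]
fsdg = lambda c, k, d: [4.0, 5.0]
print(concat_encoders(['hi'], [], 'weather', [di_vae, di_vst, fsdg]))
# -> [1.0, 2.0, 3.0, 4.0, 5.0]
```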
For LAED training, we use MetaLWOz (Lee et al., 2019), a human-human goal-oriented dialogue corpus specifically designed for various meta-learning and pre-training purposes. It contains conversations in 51 domains, with several tasks in each of those. The dialogues are collected using the Wizard-of-Oz method, where human participants were given a problem domain and a specific task. No domain-specific APIs or knowledge bases were available for the participants, and in the actual dialogues they were free to use fictional names and entities in a consistent way. The dataset totals more than 40,000 dialogues, with an average length of 11.9 turns. 5 Experimental setup and evaluation Our few-shot setup is as follows. Given the target domain, we first train LAED models (a dialogue-level DI-VST and an utterance-level DI-VAE, both of size 10 \u00d7 5) on the MetaLWOz dataset \u2014 here we exclude from training every domain that might overlap with the target one. Next, using the LAED encoders, we train a Few-Shot Dialogue Generation model on all the SMD source domains. We use a random sample (1% to 10%) of the target domain utterances together with their contexts as seed data. We incorporate KB information into our model by simply serializing the records and prepending them to the dialogue context, ending up with a setup similar to CopyNet in (Eric et al., 2017). For the NLU ZSDG setup, we use 1000 random seed utterances from each source domain and 200 utterances from the target domain (see footnote 2). For evaluation, we follow the approach of (Zhao and Esk\u00e9nazi, 2018) and report BLEU and Entity F1 scores \u2014 means/variances over 10 runs. 6 Results and discussion Our results are shown in Table 1. Our objective here is maximum accuracy with minimum training data required, and it can be seen that few-shot models with LAED representation are the best performing models for this objective.
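The KB handling above (serialize the records, then prepend them to the dialogue context, similar in spirit to the CopyNet setup of Eric et al. (2017)) can be sketched as follows; the token format is an assumption for illustration.

```python
def serialize_kb(records):
    # Flatten KB rows (dicts) into a token sequence; tokenization is assumed.
    tokens = []
    for row in records:
        for key, value in row.items():
            tokens += [key] + str(value).split()
    return tokens

def build_model_input(kb_records, context_utterances):
    # Prepend serialized KB info to the dialogue context tokens.
    context_tokens = ' '.join(context_utterances).split()
    return serialize_kb(kb_records) + context_tokens

inp = build_model_input(
    [{'poi': 'Stanford Express Care', 'address': '214 El Camino Real'}],
    ['Find the address to a hospital'],
)
print(inp[:3])  # -> ['poi', 'Stanford', 'Express']
```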
While the improvements can already be seen with simple FSDG, the use of the LAED representation helps to significantly reduce the amount of in-domain training data needed: in most cases, the state-of-the-art results are attained with as little as 3% of in-domain data. At 5%, we see that FSDG+LAED consistently improves upon all other models in every domain, either by increasing the mean accuracy or by decreasing the variation. In SMD, with its average dialogue length of 5.25 turns (see Table 4), 5% of training dialogues amounts to approximately 200 in-domain training utterances. In contrast, the ZSDG setup used approximately 150 annotated training utterances for each of the 3 domains, totalling about 450 annotated utterances. Although in our few-shot approach we use full in-domain dialogues, we end up having a comparable amount of target-domain training data, with the crucial difference that none of it has to be annotated for our approach. Therefore, the method we introduced attains state-of-the-art in both accuracy and data-efficiency. The results of the ZSDG NLU setup demonstrate that single utterance annotations, if not domain-specific and produced by human experts, don\u2019t provide as much signal as raw dialogues. (Footnote 2: The numbers are selected so that the domain description task is kept secondary.) The comparison of the setups with different latent representations also gives us some insight: while the VAE-powered FSDG model improves on the baseline in multiple cases, it lacks generalization potential compared to LAED. The reason for that might be the inherently more stable training of LAED due to its modified objective function, which in turn results in a more informative, generalizable representation. Finally, we discuss the evaluation metrics. Since we base this paper on the work of Zhao and Esk\u00e9nazi (2018), we have had to fully conform to the metrics they used to enable direct comparison.
However, BLEU, as the primary evaluation metric, does not necessarily reflect NLG quality in dialogue settings \u2014 see examples in Table 2 of the Appendix (see also Novikova et al. (2017)). This is a general issue in dialogue model evaluation, since the variability of possible responses equivalent in meaning is very high in dialogue. In future work, instead of using BLEU, we will put more emphasis on the meaning of utterances, for example by using external dialogue act tagging resources, quality metrics of language generation \u2013 e.g. perplexity \u2013 as well as more task-oriented metrics like Entity F1. We expect these to make for more meaningful evaluation criteria. 7 Conclusion and future work In this paper, we have introduced a technique to achieve state-of-the-art dialogue generation performance in a few-shot setup, without using any annotated data. By leveraging larger, more highly represented dialogue sources and learning robust latent dialogue representations from them, we obtained a model with superior generalization to an underrepresented domain. Specifically, we showed that our few-shot approach achieves state-of-the-art results on the Stanford Multi-Domain dataset while being more data-efficient than the previous best model, by not requiring any data annotation. Although state-of-the-art, the accuracy scores themselves still suggest that our technique is not ready for immediate adoption for real-world production purposes, and the task of few-shot generalization to a new dialogue domain remains an area of active research. We expect that such initiatives will be fostered by the release of large dialogue corpora such as MetaLWOz. In our own future work, we will try to find ways to improve the unsupervised representation in order to increase the transfer potential. Adversarial learning can also be beneficial in the setting of limited data.
Apart from improving the model itself, it is also necessary to consider an alternative criterion to the BLEU score for adequate evaluation of response generation."
+ },
+ {
+ "url": "http://arxiv.org/abs/1811.12148v1",
+ "title": "Improving Robustness of Neural Dialog Systems in a Data-Efficient Way with Turn Dropout",
+ "abstract": "Neural network-based dialog models often lack robustness to anomalous,\nout-of-domain (OOD) user input which leads to unexpected dialog behavior and\nthus considerably limits such models' usage in mission-critical production\nenvironments. The problem is especially relevant in the setting of dialog\nsystem bootstrapping with limited training data and no access to OOD examples.\nIn this paper, we explore the problem of robustness of such systems to\nanomalous input and the associated to it trade-off in accuracies on seen and\nunseen data. We present a new dataset for studying the robustness of dialog\nsystems to OOD input, which is bAbI Dialog Task 6 augmented with OOD content in\na controlled way. We then present turn dropout, a simple yet efficient negative\nsampling-based technique for improving robustness of neural dialog models. We\ndemonstrate its effectiveness applied to Hybrid Code Network-family models\n(HCNs) which reach state-of-the-art results on our OOD-augmented dataset as\nwell as the original one. Specifically, an HCN trained with turn dropout\nachieves state-of-the-art performance of more than 75% per-utterance accuracy\non the augmented dataset's OOD turns and 74% F1-score as an OOD detector.\nFurthermore, we introduce a Variational HCN enhanced with turn dropout which\nachieves more than 56.5% accuracy on the original bAbI Task 6 dataset, thus\noutperforming the initially reported HCN's result.",
+ "authors": "Igor Shalyminov, Sungjin Lee",
+ "published": "2018-11-29",
+ "updated": "2018-11-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "I.2.7"
+ ],
+ "main_content": "Introduction Data-driven approaches for building dialog systems have recently passed the stage of open-ended academic research and are adopted in platforms like Google Dialogflow, Apple SiriKit, Amazon Alexa Skills Kit, and Microsoft Cognitive Services. However, most of those platforms\u2019 data-driven functionality is limited to Natural Language Understanding: user intent detection, named entity recognition, and slot filling. A more unified approach to dialog system bootstrapping \u2014 end-to-end dialog learning \u2014 is still only emerging as a commercial service, e.g. Microsoft Conversation Learner. Although still in its early days, end-to-end dialog learning from examples offers great potential: it doesn\u2019t require advanced programming skills, and thus it makes it possible for a wider range of users to create dialog systems for their purposes. In turn, in the enterprise environment, end-to-end dialog learning bridges the gap between user experience designers and the actual working systems, thus making product cycles and overall workflow faster. From the technical point of view, the key issue in end-to-end training is the lack of robustness of the resulting systems. In the real-world setting of rapid dialog system prototyping, it is common to have only in-domain (IND) data for a closed target domain. This leads to a significant overfitting of machine learning methods and unpredictable behavior in cases outside of what was seen during training. (\u2217The work was done during an internship at Microsoft Research. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montr\u00e9al, Canada.) [Figure 1: Hybrid Code Network model family] For a closed-domain dialog system, it\u2019s extremely important to maintain predictable behavior on anomalous, OOD user input. 
In this paper, we focus on studying the effect of OOD input on end-to-end goal-oriented dialog models\u2019 performance and propose a simple and efficient solution to improving robustness using only IND data. Our contribution is thus two-fold: \u2022 We present a dataset for studying the effect of OOD input on dialog models. \u2022 We present turn dropout, an efficient negative sampling-based technique for training dialog models that are capable of OOD handling using only the IND data. We show that HCN-based models enhanced with turn dropout show superior performance on OOD input, as well as surpass the original HCN\u2019s result on IND-only data. 2 Related work Detection of anomalous input is a key research problem in machine learning. In the area of dialog systems, there is a series of approaches to detecting and processing OOD input. If treated as a classification problem, this task requires both IND and OOD data (Nakano et al., 2011; Tur et al., 2014). However, for the real-world scenario of end-to-end dialog system learning, the task of collecting data covering a potentially unbounded variety of OOD input is impractical. In contrast, there are also approaches like the in-domain verification method (Lane et al., 2007) and autoencoder-based OOD detection (Ryu et al., 2017) which do not require OOD data. However, they still have restrictions: there must be multiple sub-domains to learn utterance representations, and one must set a decision threshold for OOD detection. For a dialog system that is supposed to work in a single closed domain, these methods are not a viable solution. In contrast to those approaches, we present a simple and efficient technique for training dialog systems robust to OOD input in an end-to-end way, which allows the model to leverage the dialog context information and avoids the necessity of using OOD data. 
3 Dataset for studying robustness of dialog systems In order to study the effect of OOD input on end-to-end dialog system performance, we used a dataset of real human-computer goal-oriented dialogs and augmented it with real user utterances from other domains in a controlled way, using the open-source toolkit bAbI tools (Shalyminov et al., 2017; footnote 2: https://bit.ly/babi_tools). [Table 1: Augmented dialog example (OOD content in bold, segment-level in italics) \u2014 1 sys: Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you? / 2 usr: I\u2019d like to get away from Monday, August 15, 2016 to Wednesday, August 31, 2016. My budget is $3200 and I would leave from Detroit. / 3 sys: Sorry I didn\u2019t catch that. Could you please repeat? / 4 usr: Will there be frost in Durham next week? / 3 sys: Sorry I didn\u2019t catch that. Could you please repeat? / 4 usr: So sorry man I want a moderately priced restaurant in the west part of town / 5 sys:] As our main dataset, we use bAbI Dialog Task 6 (Bordes et al., 2017), real human-computer conversations in the restaurant search domain initially collected for Dialog State Tracking Challenge 2 (Henderson et al., 2014). Our OOD augmentations are as follows: \u2022 turn-level OOD: user requests from a foreign domain \u2014 the desired system behavior for such input is the fallback action; \u2022 segment-level OOD: interjections in the user\u2019s in-domain requests \u2014 treated as valid user input that is supposed to be handled by the system in a regular way. These two augmentation types reflect a specific dialog pattern of interest (see Table 1): first, the user utters a request from another domain at an arbitrary point in the dialog (each turn is augmented with the probability p_ood_start), and the system answers accordingly. This may go on for several turns in a row \u2014 each following turn is augmented with the probability p_ood_cont. 
Eventually, the OOD sequence ends and the dialog continues as usual, with a segment-level OOD of the user affirming their mistake. For this study, we set p_ood_start to 0.2 and p_ood_cont to 0.4. (Footnote 3: We experimented with other values of p_ood_start and p_ood_cont but didn\u2019t see significant differences in the results. Further experiments for different domains are encouraged using the tools provided.) While we introduce the OOD augmentations in a controlled programmatic way, the actual OOD content is natural. The turn-level OOD utterances are taken from dialog datasets in several foreign domains: \u2022 Frames dataset (Asri et al., 2017) \u2014 travel booking (1198 utterances), \u2022 Stanford Key-Value Retrieval Network Dataset (Eric et al., 2017) \u2014 calendar scheduling, weather information retrieval, city navigation (3030 utterances), \u2022 Dialog State Tracking Challenge 1 (Williams et al., 2013) \u2014 bus information (968 utterances). In order to avoid incomplete/elliptical phrases, we only took the first user utterance from each dialog. For segment-level OOD, we mined utterances with an explicit affirmation of a mistake (e.g. \u201cmy mistake\u201d, \u201cI\u2019m so sorry\u201d) from Twitter and Reddit conversation datasets \u2014 701 and 500 utterances respectively. Our datasets, as well as the tools for OOD-augmentation of arbitrary datasets of interest, are openly available. (Footnote 4: See https://github.com/ishalyminov/ood_robust_hcn) 4 A data-efficient technique for training robust dialogue systems 4.1 Models In this paper, we experiment with the Hybrid Code Network family of models (Williams et al., 2017). HCN is reported to be state-of-the-art for the original, IND-only bAbI Dialog Task 6 data; thus, in this paper we experiment with it and explore its robustness to OOD input. HCN is a hierarchical dialog control model with a turn-level and a dialog-level component (we will call them both encoders). The turn-level encoder produces a latent representation of a single dialog turn, and the dialog-level one augments it with additional dialog-level features, such as binary indicators of which slot values have been provided and whether the latest API call returned any results. The dialog-level encoder (RNN-based for all the models described) outputs a latent representation of the entire dialog, which is then fed into the predictor MLP. Its output is the sequence of dialog actions (restricted by binary action masks provided by domain experts). Our models are described below \u2014 they share the same dialog-level encoder and predictor; the differences are at the turn level and in the overall optimization objective (see Figure 1 for an illustration). HCN \u2014 the original model introduced by Williams et al. (2017). Its encoding of the user\u2019s input turn x consisting of N tokens is as follows: HCN(x) = (1/N) \u03a3_i w2v(x_i) (1), where w2v is the pre-trained Google News word2vec embedding (frozen at training time). HCN\u2019s optimization objective is categorical cross-entropy with respect to the log-likelihood (here and in Eq. 5 we show maximization objectives for simplicity; in the actual implementation, they are minimized with their sign reversed): L_HCN = log p(a | x, c) (2), where a is the dialog action and c is the dialog context. Hierarchical HCN (HHCN) uses an RNN (in our case an LSTM cell (Hochreiter and Schmidhuber, 1997)) for encoding each utterance: HHCN(x) = LSTM(x) (3). The optimization objective is the same as that of HCN. Variants of this model were described by Lee (2017) and Liang and Yang (2018). 
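The mean-of-embeddings turn encoding of Eq. 1 can be sketched as follows; the tiny `w2v` table and its 2-dimensional vectors are illustrative stand-ins for the pre-trained Google News word2vec embeddings:

```python
# Sketch of the HCN turn encoder (Eq. 1): a turn is represented as the
# mean of its (frozen) word embeddings. OOV tokens map to a zero vector
# here, which is an assumption for illustration.

def hcn_encode(turn_tokens, w2v, dim):
    """Mean-of-embeddings encoding of one dialog turn."""
    zero = [0.0] * dim
    vecs = [w2v.get(tok, zero) for tok in turn_tokens]
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(dim)]

# Toy 2-dimensional embedding table (hypothetical values).
w2v = {"cheap": [1.0, 0.0], "restaurant": [0.0, 1.0]}
encoding = hcn_encode(["cheap", "restaurant"], w2v, dim=2)
print(encoding)  # [0.5, 0.5]
```

Because the encoding is an average rather than a sequence model, token order is deliberately ignored, which is relevant to the keyword-sensitivity discussion later in the paper.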
Variational HCN (VHCN), which, to the best of our knowledge, is presented here for the first time, uses a Variational Autoencoder as the turn-level encoder, so that the resulting turn encoding is the VAE\u2019s latent variable (normally referred to as z): VHCN(x) = \u00b5(LSTM(x)) + \u03c3(LSTM(x)) \u00b7 \u03b5, with \u03b5 ~ N(0, 1) (4), where \u00b5 and \u03c3 are MLPs for predicting z\u2019s posterior distribution parameters, and \u03b5 is a sample from its prior distribution, a standard Gaussian (Bowman et al., 2016). This model differs from the previous two in that it learns dialog control and autoencoding jointly. In order to keep the secondary task less complex than the main one, we represent the VAE\u2019s reconstruction targets as bags of words (BoW). Thus, the VHCN optimization objective is as follows: L_VHCN = E_q(z)[log p(a | z, c)] + E_q(z)[log p(x_BoW | z)] \u2212 KL(q(z | x) || p(z)) (5). In the formula above, the first term is the main task\u2019s log-likelihood of the dialog action a, the second one is the VAE\u2019s reconstruction term for the user input in bag-of-words form x_BoW, and the last term is the KL divergence between the prior and posterior distributions of the VAE\u2019s latent variable z \u2014 following Bowman et al. (2016), we compute it in closed form. Another benefit of the BoW loss is that, as reported by Zhao et al. (2017), it helps keep the variational properties of the model (i.e. a non-zero KL term) without the necessity of the KL-term annealing trick (Bowman et al., 2016), which is itself challenging to control in practice. Unlike the authors of the original BoW loss approach, we don\u2019t stack softmax cross-entropy losses for each token and instead use a single sigmoid cross-entropy loss for the entire BoW vector. All the models above use the same dialog-level LSTM encoder, with additional features concatenated to the turn representations: BoW turn features, dialog context features, and the previous system action. 
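The sampling step of Eq. 4 is the standard reparameterisation trick; a minimal sketch follows, where the `mu` and `sigma` vectors are placeholder values standing in for the MLP outputs over the LSTM turn encoding:

```python
# Sketch of VHCN's latent turn encoding (Eq. 4): z = mu + sigma * eps,
# with eps drawn element-wise from a standard Gaussian.
import random

def vhcn_sample_z(mu, sigma, rng):
    """Draw z from the posterior N(mu, sigma^2), element-wise."""
    return [m + s * rng.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]

rng = random.Random(0)
z = vhcn_sample_z(mu=[0.5, -0.2], sigma=[0.1, 0.1], rng=rng)
assert len(z) == 2
# With sigma -> 0 the encoding collapses to the deterministic mean:
assert vhcn_sample_z([0.5, -0.2], [0.0, 0.0], rng) == [0.5, -0.2]
```

Sampling z this way keeps the encoder differentiable with respect to `mu` and `sigma`, which is why the trick is used in VAE training.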
(Footnote 5: Without loss of architecture generality, we have action mask vectors as additional features for the dialog-level LSTM (Williams et al., 2017), but they don\u2019t convey any information and are always set to 1\u2019s.) Table 2: Evaluation results. Columns: overall accuracy on bAbI Dialog Task 6; then, on bAbI Dialog Task 6 + OOD: overall accuracy, segment-level OOD accuracy, OOD accuracy, and OOD F1. HCN: 0.557 | 0.438 | 0.455 | 0.0 | 0.0. HHCN: 0.531 | 0.418 | 0.424 | 0.0 | 0.0. VHCN: 0.533 | 0.413 | 0.413 | 0.0 | 0.0. TD-HCN: 0.563 | 0.575 | 0.257 | 0.754 | 0.743. TD-HHCN: 0.505 | 0.455 | 0.435 | 0.274 | 0.418. TD-VHCN: 0.565 | 0.545 | 0.407 | 0.530 | 0.667. 4.2 Turn dropout In order to train a system robust to OOD in the absence of real OOD examples, we employ a negative sampling-based approach and generate such examples synthetically from the available IND data with a technique we call turn dropout. Namely, we replace random dialog turns with synthetic ones and assign them the fallback action. More formally, our dialog features are <f_turn, f_ctx, f_mask, a>, i.e. turn features (token sequences), dialog context features, action masks, and target actions respectively. Under turn dropout, for a randomly selected dialog i and its turn j, we replace f_turn_ij with a sequence of random vocabulary words (drawn from a uniform distribution over the vocabulary) and UNK tokens, set the corresponding a_ij to the fallback action, and leave all other features intact. In this way, we simulate anomalous turns for the system given usual contexts (as stored in the dialog RNN\u2019s state), and we put minimum assumptions on the synthesized turns\u2019 structure (we only limit their lengths to be within the bounds of the real utterances). 5 Experimental setup and evaluation We train our models using only the original bAbI Dialog Task 6 dataset and evaluate them on our OOD-augmented versions of it. We use per-utterance accuracy as our main evaluation metric; the models are trained with the same hyperparameters (where applicable) listed in Table 3. 
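A minimal sketch of turn dropout as described above; the simplified feature layout (just turn tokens plus a target action per turn), the toy vocabulary, and the `FALLBACK` label are stand-ins for the full <f_turn, f_ctx, f_mask, a> features:

```python
# Sketch of turn dropout: with probability p, a turn's token features are
# replaced by a random-length sequence of uniformly drawn vocabulary words
# and UNK tokens, and its target action is set to the fallback action.
import random

FALLBACK = "fallback_action"

def turn_dropout(dialog, vocab, p, min_len, max_len, rng):
    """dialog: list of (turn_tokens, action) pairs; returns a corrupted copy."""
    out = []
    for turn_tokens, action in dialog:
        if rng.random() < p:
            length = rng.randint(min_len, max_len)
            noise = [rng.choice(vocab + ["UNK"]) for _ in range(length)]
            out.append((noise, FALLBACK))
        else:
            out.append((turn_tokens, action))
    return out

rng = random.Random(1)
dialog = [(["hi"], "greet"), (["cheap", "food"], "request_area")]
corrupted = turn_dropout(dialog, ["a", "b", "c"], p=1.0, min_len=1, max_len=3, rng=rng)
assert all(action == FALLBACK for _, action in corrupted)
```

Because only IND data and a vocabulary are needed, this matches the paper's constraint of having no access to real OOD examples at training time.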
The models use a common unified vocabulary including all words from our datasets (including the OOD content). The intuition behind this is as follows: production dialog models often use word embedding matrices with vocabularies significantly exceeding that of the training data in order to take advantage of additional generalization power via relations like synonymy, hyponymy, or hypernymy, which are normally handled efficiently by distributed word representations. Therefore, mapping every unseen word to an \u2018UNK\u2019 doesn\u2019t quite reflect that setting. We tuned our models\u2019 hyperparameters using a 2-stage grid search, tracking the development set accuracy. At the first stage, we adjusted the embedding dimensionality of our models (and the latent variable size in the case of VHCN). Then, given the values found, at the second stage we adjusted the turn dropout ratio over the interval [0.05, 0.7]. Exact hyperparameter values are detailed in Table 3. The results are shown in Table 2 \u2014 please note that, apart from the accuracies, we report the OOD F1-measure, a metric showing the model\u2019s performance as a conventional OOD detector, with the positive class being the fallback action and the negative class being all the IND actions. Finally, given the stochastic nature of VHCN, we report its mean accuracy scores over 3 runs (we used the same criterion for selecting the best model during the training procedure). 6 Discussion and future work In this paper, we explored the problem of robustness of neural dialog systems to OOD input. Specifically, we presented a dataset for studying this problem, along with a general procedure for augmenting arbitrary datasets of interest for this purpose. Secondly, we introduced turn dropout, a simple yet efficient technique for improving OOD robustness of dialog control models, and evaluated its effect on several Hybrid Code Network-family models. 
As our experiments showed, when learning to handle both IND and OOD input with access to IND-only data at training time, the following trade-off appears: a model performing better on the \u2018clean\u2019 test turns is prone to lower accuracy on OOD \u2014 it can be said that it slightly overfits to its dev set. On the other hand, a model regularized with turn dropout during training naturally performs better on unseen OOD turns, but with not as high accuracy on its \u2018clean\u2019 IND test data. Another side of the trade-off is the accuracy of OOD detection vs. robust handling of IND input with segment-level noise. As our results showed, models specifically trained for OOD detection all demonstrate lower accuracy on the noisy IND data. Among the models we evaluated, it\u2019s worth noting that the original HCN demonstrated the best performance as an OOD detector (more than 74% F1-score) and thus the best overall IND + OOD accuracy on the augmented dataset \u2014 more than 57%. While some parts of its architecture (e.g. mean vector-based turn encoding or the bag-of-words feature vector at the utterance level) may not seem to be the most robust solution, the model demonstrates superior overall performance. Averaging at the turn level instead of recurrent encoding (the case of HHCN and VHCN) makes the model depend less on the actual word sequences seen during training and more on the keywords themselves. In turn, VHCN demonstrated superior performance on IND data when trained with turn dropout \u2014 more than 56% \u2014 benefiting in terms of both overall accuracy and the absence of false-positive OODs, thus outperforming the original HCN as reported by Williams et al. (2017). An additional challenge was to train it while keeping its variational properties (i.e. a reasonably high KL term) \u2014 the BoW reconstruction loss, which we used in order to simplify the secondary task, helped with this as well (Zhao et al., 2017). 
On the other hand, while achieving superior performance on clean data, VHCN\u2019s properties didn\u2019t result in OOD handling improvements. The question that is still unanswered is how these techniques apply to the setting of few-shot training. In the practical setup of training dialog systems from minimal data, having access to even medium-sized datasets like bAbI Dialog Task 6 isn\u2019t realistic, and all the initial requirements for the models have to be met using only the minimal training data available. It\u2019s the next step in our research to explore how our techniques apply to this setup and what needs to be done in order to achieve OOD robustness with maximum few-shot data efficiency."
+ },
+ {
+ "url": "http://arxiv.org/abs/1811.00967v1",
+ "title": "Neural Response Ranking for Social Conversation: A Data-Efficient Approach",
+ "abstract": "The overall objective of 'social' dialogue systems is to support engaging,\nentertaining, and lengthy conversations on a wide variety of topics, including\nsocial chit-chat. Apart from raw dialogue data, user-provided ratings are the\nmost common signal used to train such systems to produce engaging responses. In\nthis paper we show that social dialogue systems can be trained effectively from\nraw unannotated data. Using a dataset of real conversations collected in the\n2017 Alexa Prize challenge, we developed a neural ranker for selecting 'good'\nsystem responses to user utterances, i.e. responses which are likely to lead to\nlong and engaging conversations. We show that (1) our neural ranker\nconsistently outperforms several strong baselines when trained to optimise for\nuser ratings; (2) when trained on larger amounts of data and only using\nconversation length as the objective, the ranker performs better than the one\ntrained using ratings -- ultimately reaching a Precision@1 of 0.87. This\nadvance will make data collection for social conversational agents simpler and\nless expensive in the future.",
+ "authors": "Igor Shalyminov, Ond\u0159ej Du\u0161ek, Oliver Lemon",
+ "published": "2018-11-02",
+ "updated": "2018-11-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "I.2.7"
+ ],
+ "main_content": "Introduction Chatbots, or socialbots, are dialogue systems aimed at maintaining an open-domain conversation with the user spanning a wide range of topics, with the main objective of being engaging, entertaining, and natural. Under one of the current approaches to such systems, the bot ensemble (Serban et al., 2017; Yu et al., 2016; Song et al., 2016), a collection, or ensemble, of different bots is used, each of which proposes a candidate response to the user\u2019s input, and a response ranker selects the best response for the final system output to be uttered to the user. (Footnote 1: Code and trained models are available at https://github.com/WattSocialBot/alana_learning_to_rank) In this paper, we focus on the task of finding the best supervision signal for training a response ranker for ensemble systems. Our contribution is twofold: first, we present a neural ranker for ensemble-based dialogue systems and evaluate its level of performance using an annotation type which is often used in open-domain dialogue and was provided to the Alexa Prize 2017 participants by Amazon (Ram et al., 2017): per-dialogue user ratings. Second, and most importantly, we explore an alternative way of assessing social conversations simply via their length, thus removing the need for any user-provided ratings. 2 Data Efficiency in Social Dialogue 2.1 The Need for Data Efficiency It is well known that deep learning models are highly data-dependent, but there are currently no openly available data sources which can provide enough high-quality open-domain social dialogues for building a production-level socialbot. Therefore, a common way to get the necessary data is to collect it on a crowdsourcing platform (Krause et al., 2017). Based on the model type and the development stage, it may be necessary to collect either whole dialogues, or some form of human feedback on how good a particular dialogue or turn is. 
However, both kinds of data are time-consuming and expensive to collect. The data efficiency of a dialogue model can be split into two parts accordingly: \u2022 sample efficiency \u2013 the number of data points needed for the model to train. As such, it is useful to specify an order of magnitude of the training set size for different types of machine learning models; \u2022 annotation efficiency \u2013 the amount of annotation effort needed. For instance, traditional goal-oriented dialogue system architectures normally require intent, slot value, and dialogue state annotation (e.g. Young et al., 2010), whereas end-to-end conversational models work simply with raw text transcriptions (e.g. Vinyals and Le, 2015). [Table 1: Correlation study of key dialogue aspects (Pearson correlation coefficients) \u2014 rating/length: 0.11; rating/positive feedback: 0.11; rating/negative feedback: 0.04; length/positive feedback: 0.67; length/negative feedback: 0.49.] 2.2 Alexa Prize Ratings The 2017 Alexa Prize challenge made it possible to collect large numbers of dialogues between real users of Amazon Echo devices and various chatbots. The only annotation collected was per-dialogue ratings elicited at the end of conversations by asking the user \u201cOn a scale of 1 to 5, how much would you like to speak with this bot again\u201d (Venkatesh et al., 2017). Less than 50% of conversations were actually rated; the rest were quit without the user giving a score. In addition, note that a single rating is applied to an entire conversation (rather than individual turns), which may consist of very many utterances. The conversations in the challenge were about 2.5 minutes long on average, and about 10% of conversations were over 10 minutes long (Ram et al., 2017) \u2013 this makes the ratings very sparse. Finally, the ratings are noisy \u2013 some dialogues which are clearly bad can get good ratings from some users, and vice versa. 
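The figures in Table 1 are plain Pearson correlation coefficients; a self-contained sketch follows, using made-up toy data rather than the Alexa logs:

```python
# Pearson correlation between dialogue length and a per-dialogue signal.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative toy data: dialogue lengths (turns) vs. positive-feedback turns.
lengths = [4, 10, 25, 40, 60]
positive_feedback = [0, 1, 2, 4, 5]
r = pearson(lengths, positive_feedback)
assert r > 0.9                                           # strongly correlated toy data
assert abs(pearson([1, 2, 3], [3, 2, 1]) + 1.0) < 1e-9   # perfect negative correlation
```

On real data, a coefficient of 0.67 (as in the length/positive-feedback cell of Table 1) indicates a moderate positive relationship, which is what motivates using length as a training signal.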
Given the main objective of social dialogue, stated in the Alexa Prize rules as \u2018long and engaging\u2019 conversation, we tried to verify the assumption that user ratings reflect these properties of the dialogue. Apart from our observations above, we performed a correlation analysis of user ratings and aspects of dialogue directly reflecting the objective: dialogue length and explicit user feedback (see Table 1). Although we have a significant number of dialogues which are both long and highly rated, the correlation analysis was not able to show any relationship between dialogue length and rating. Neither are ratings correlated with user feedback (see Section 6 for the details of user feedback collection). On the other hand, we found a promising moderate correlation between conversation length and explicit positive feedback from users (specifically, the number of dialogue turns containing it). [Figure 1: Neural ranker architecture] The respective length/negative feedback relationship is slightly weaker. Therefore, we experiment with conversation length for approximating user satisfaction and engagement and use it as an alternative measure of dialogue quality. This allows us to take advantage of all conversations, not just those rated by users, for training a ranker. While some conversations might be long but not engaging (e.g. if there are a lot of misunderstandings, corrections, and speech recognition errors), training a ranker only using length makes it extremely annotation-efficient. 3 A neural ranker for open-domain conversation The ranker described here is part of Alana, Heriot-Watt University\u2019s Alexa Prize 2017 finalist socialbot (Papaioannou et al., 2017). 
Alana is an ensemble-based model incorporating information-retrieval-based bots with news content and information on a wide range of topics from Wikipedia, a question answering system, and rule-based bots for various purposes, from amusing users with fun facts to providing a consistent persona. The rule-based bots are also required to handle sensitive issues which can be raised by real users, such as medical, financial, and legal advice, as well as profanities. 3.1 Ranker architecture The architecture of our ranker is shown in Figure 1. The inputs to the model are 1-hot vectors of a candidate response and the current dialogue context (we use the 3 most recent system and user turns). They are encoded into latent representations using a single shared RNN encoder based on GRU cells (Cho et al., 2014). The context embedding vectors are then summed up and concatenated with the response embedding (Eq. 1): Enc(C, r) = \u03a3_i RNN(C_i) \u2295 RNN(r) (1), where C is the dialogue context and r is a response candidate. The context and the response are represented using combined word-agent tokens (where the agent is either a specific bot from the ensemble or the user) and are concatenated with the lists of named entities extracted using Stanford NER (Finkel et al., 2005). All the word-agent tokens and named entities share the same unified vocabulary. Encoder outputs, along with additional dialogue features such as context and response sentiment, timestamp, and bot names in the context and the response, go into the Predictor, a feed-forward neural network (MLP) whose output is the resulting rating (Eq. 2): Pred(C, r) = \u03c3(L(Sem(C, r) \u2295 f(C, r))) (2), where L(x) = ReLU(Mx + b) is the layer used in the Predictor (the number of such layers is a model parameter), Sem(C, r) = L(Enc(C, r)) is the vector of semantic context-response features, and f(C, r) is a vector of the additional dialogue features listed above. 
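The scoring path of Eqs. 1-2 (sum context encodings, concatenate with the response encoding and extra features, apply ReLU layers and a final sigmoid) can be sketched as follows; the bag-of-token-hashes `encode` function and the identity/unit weights are toy assumptions standing in for the trained GRU encoder and Predictor MLP:

```python
# Sketch of the ranker's forward pass: Enc(C, r) followed by the Predictor.
import math

def encode(tokens, dim=3):
    """Toy turn encoder (bag of token hashes), standing in for the GRU."""
    vec = [0.0] * dim
    for tok in tokens:
        vec[hash(tok) % dim] += 1.0
    return vec

def relu(v):
    return [max(0.0, x) for x in v]

def score(context_turns, response, extra_features):
    ctx = [0.0] * 3
    for turn in context_turns:                    # sum of context encodings
        ctx = [a + b for a, b in zip(ctx, encode(turn))]
    x = ctx + encode(response) + extra_features   # concatenation (the ⊕ in Eqs. 1-2)
    h = relu(x)                                   # hidden ReLU layer (identity weights)
    s = sum(h)                                    # linear output layer (unit weights)
    return 1.0 / (1.0 + math.exp(-s))             # sigmoid -> rating in [0, 1]

r = score([["hi", "there"]], ["hello"], [0.5])    # 0.5: toy sentiment feature
assert 0.0 < r < 1.0
```

The sigmoid output range matches the normalised [0, 1] targets used for training, described in Section 5.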
We use the ReLU activation for the hidden layers because it is known to be highly efficient with deep architectures (Glorot et al., 2011). Finally, we use a sigmoid activation \u03c3 for generating the final prediction in the range [0, 1]. 3.2 Training method We use either dialogue rating or length as the prediction target (as discussed in Sections 5 and 6). The model is trained to minimize the Mean Squared Error (MSE) loss against the target using the Adagrad optimizer (Duchi et al., 2011). In our training setup, the model learns to predict per-turn target values. However, since only per-dialogue ones are available in the data, we use the following approximation: the target value of a context-response pair is the target value of the dialogue containing it. The intuition behind this is the assumption that the majority of turns in \u201cgood\u201d dialogues (either length- or rating-wise) are \u201cgood\u201d in their local contexts as well \u2013 so that given a large number of dialogues, the most successful and unsuccessful turns will emerge from the corresponding dialogues. 4 Baselines We compare our neural ranker to two other models also developed during the competition: handcrafted and linear rankers \u2014 all three were deployed live in the Alana Alexa Prize 2017 finalist system (Papaioannou et al., 2017), and were therefore of sufficient quality for a production system receiving thousands of calls per day. We also compare our model to a recently published dual-encoder response selection model by Lu et al. (2017), based on an approach principally close to ours. 4.1 Handcrafted ranker In the handcrafted approach, several turn-level and dialogue-level features are calculated, and a linear combination of those feature values with manually adjusted coefficients is used to predict the final ranking. The list of features includes: \u2022 coherence, information flow, and dullness as defined by Li et al. 
(2016); \u2022 overlap between the context and the response with regard to named entities and noun phrases; \u2022 topic divergence between the context turns and the response \u2013 topics are represented using the Latent Dirichlet Allocation (LDA) model (Hoffman et al., 2010); \u2022 sentiment polarity, as computed by the NLTK Vader sentiment analyser (Gilbert and Hutto, 2014; footnote 2: http://www.nltk.org/howto/sentiment.html). 4.2 Linear ranker The linear ranker is based on the Vowpal Wabbit (VW) linear model (Agarwal et al., 2014). We use the MSE loss function and the following features in our VW ranker model: \u2022 bag-of-n-grams from the dialogue context (preceding 3 utterances) and the response, \u2022 position-specific n-grams at the beginning of the context and the response (first 5 positions), \u2022 dialogue flow features (Li et al., 2016), the same as for the handcrafted ranker, \u2022 bot name, from the set of bots in the ensemble. 4.3 Dual-encoder ranker The closest architecture to our neural ranker is that of Lu et al. (2017), who use a dual-encoder LSTM with a predictor MLP for task-oriented dialogue in closed domains. Unlike this work, they do not use named entities, sentiment, or other input features beyond basic word embeddings. Dialogue context is not modelled explicitly either, and is limited to a single user turn. We reproduced their architecture and set its parameters to the best ones reported in the original paper. 5 Training data Our data is transcripts of conversations between our socialbot and real users of the Amazon Echo collected over the challenge period, February\u2013December 2017. The dataset consists of over 200,000 dialogues (5,000,000+ turns), of which over 100,000 dialogues (totalling nearly 3,000,000 turns) are annotated with ratings. 
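The bag-of-n-grams and position-specific n-gram features can be sketched as follows; the feature-naming scheme and the `n_max`/`positional` limits are illustrative assumptions, not Vowpal Wabbit's actual feature hashing:

```python
# Sketch of the VW ranker's n-gram features: counts of token n-grams,
# plus position-specific variants for the first few token positions.
from collections import Counter

def ngram_features(tokens, n_max=2, positional=5):
    feats = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            gram = "_".join(tokens[i:i + n])
            feats["ng:" + gram] += 1
            if i < positional:                    # position-specific variant
                feats["pos%d:%s" % (i, gram)] += 1
    return feats

f = ngram_features(["i", "like", "jazz"])
assert f["ng:i_like"] == 1 and f["pos0:i"] == 1
```

In the real system these sparse features from the context and the response would be fed to the linear model alongside the dialogue-flow and bot-name features listed above.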
From this data, we sampled two datasets of matching size for training our rankers, using the per-turn target value approximation described in Section 3.2 – the Length and Rating datasets for the respective versions of the rankers. The target values (length/rating) in both sets are normalized into the [0, 1] range, and the Length set contains context-response pairs from long dialogues (target value above 0.7) as positive instances and context-response pairs from short dialogues (target value below 0.3) as negative ones. With the same selection criteria, the Rating set contains context-response pairs from highly rated dialogues (ratings 4 and 5) as positive instances and context-response pairs from low-rated dialogues (ratings 1 and 2) as negative ones. Both datasets contain 500,000 instances in total, with an equal proportion of positive and negative instances. We use an 8:1:1 split for training, development, and test sets. Prior to creating both datasets, we filtered out of the dialogue transcripts all system turns which cannot be treated as natural social interaction (e.g. a quiz game) as well as outliers (interaction length ≥ 95th percentile or less than 3 turns long).3 Thresholds of 0.3 and 0.7 were set heuristically based on preliminary data analysis. On the one hand, these values provide contrastive-enough ratings (e.g. we are not sure whether a rating in the middle of the scale can be interpreted as negative or positive). On the other hand, they allow us to get enough training data for both the Length and Rating datasets.4 6 Evaluation and experimental setup In order to tune the neural rankers, we performed a grid search over the shared encoder GRU layer size and the Predictor topology.5 The best configurations are determined by the loss on the development sets. For evaluation, we used an independent dataset.
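Combining the per-turn target approximation (Section 3.2) with the selection thresholds above gives, in sketch form (the helper name is ours; targets are assumed already normalized to [0, 1]):

```python
def per_turn_instances(dialogues):
    """Per-turn target approximation: every (context, response) pair
    inherits the target of its whole dialogue, then only contrastive
    pairs are kept: target > 0.7 -> positive (1), target < 0.3 ->
    negative (0); mid-range dialogues are dropped entirely."""
    dataset = []
    for turns, dialogue_target in dialogues:
        for context, response in turns:
            if dialogue_target > 0.7:
                dataset.append((context, response, 1))
            elif dialogue_target < 0.3:
                dataset.append((context, response, 0))
    return dataset
```

A dialogue with a mid-range target (e.g. 0.5) contributes no instances, which is exactly what makes the remaining labels contrastive enough to train on.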
6.1 Evaluation based on explicit user feedback At the evaluation stage, we check how well the rankers can distinguish between good responses and bad ones. The criterion for ‘goodness’ that we use here is chosen to be independent from both training signals. Specifically, we collected an evaluation set composed of dialogue turns followed by explicit user feedback, e.g. “great, thank you”, “that was interesting” (we refer to it as the User feedback dataset). Our ‘bad’ response candidates are randomly sampled across the dataset. The user feedback turns were identified using sentiment analysis in combination with a whitelist and a blacklist of hand-picked phrases, so that in total we used 605 unique utterances, e.g. “that’s pretty cool”, “you’re funny”, “gee thanks”, “interesting fact”, “funny alexa you’re funny”. ‘Goodness’ defined in this way allows us to evaluate how well our two approximated training signals can optimize for the user’s satisfaction as explicitly expressed at the turn level, thus leading to our desired behaviour, i.e., producing long and engaging dialogues. (Footnote 3: Some extremely long dialogues are due to users repeating themselves over and over, and so this filter removes these bad dialogues from the dataset. Dialogues less than 3 turns long are often where the user accidentally triggered the chatbot. These outliers amounted to about 14% of our data. Footnote 4: Using more extreme thresholds did not produce enough data, while less extreme ones did not provide an adequate training signal. Footnote 5: We tested GRU sizes of 64, 128, 256 and Predictor layer numbers/sizes of [128], [128, 64], [128, 32, 32].) The User feedback dataset contains 24,982 ⟨context, good response, bad response⟩ tuples in total. To evaluate the rankers on this dataset, we use precision@k, which is commonly used for information retrieval system evaluation (Eq. 3).
P@k(c, R) = (1/k) · Σ_{i=1}^{k} Relevant(c, R_i)   (3) where c is the dialogue context, R is the response candidate list, and Relevant is a binary predicate indicating whether a particular response is relevant to the context. Precision is typically used together with recall and F-measure. However, our dialogue data is extremely sparse, so it is hard to find multiple good responses for the same exact dialogue context; recall and F-measure therefore cannot be applied in this setting. Since we only perform pairwise ranking, we use precision@1 to check that the good answer is the top-ranked one. Also due to data sparsity, we only perform this evaluation with gold positive responses and sampled negative ones – it is typically not possible to find a good response with exactly the same context as a given bad response. 6.2 Interim results The results of our first experiment are shown in Table 2. We can see that the neural ranker trained with user ratings clearly outperforms all the alternative approaches in terms of test set loss on its respective dataset as well as pairwise ranking precision on the evaluation dataset. Also note that both versions of the neural ranker stand extremely close to each other on both evaluation criteria, given the much greater gap between them and their next-best-performing alternatives, the linear rankers. The dual-encoder ranker turned out not to be an effective model for our problem, partly because it was originally optimized for a different task, as reported by Lu et al. (2017). 7 Training on larger amounts of data A major advantage of training on raw dialogue transcripts is data volume: in our case, we have roughly twice as many raw dialogues as rated ones (cf. Section 5).
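The pairwise evaluation from Section 6.1 can be sketched as follows; this is a minimal stand-in, and the toy scorer in the usage example is ours, not the trained ranker:

```python
def pairwise_precision_at_1(triples, score):
    """Pairwise precision@1: the fraction of <context, good, bad>
    tuples where the good response is scored above the bad one."""
    hits = sum(1 for c, good, bad in triples if score(c, good) > score(c, bad))
    return hits / len(triples)

# Toy scorer for illustration only: prefer longer responses.
toy_score = lambda context, response: len(response)
```

With a perfect ranker every good response outscores its sampled negative, giving precision@1 of 1.0; random scoring gives about 0.5.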
This situation is very common in data-driven development: since data annotation is a very expensive and slow procedure, there is almost always significantly more raw data than high-quality annotated data. To illustrate this, we collected extended training datasets of raw dialogues of up to 1,000,000 data points for training from the length signal. We trained our neural ranker and the VW ranker using the same configuration as in Section 6. (Footnote 6: We were not able to train the dual-encoder ranker on all the extended datasets due to time constraints.) The results are shown in Figure 2, where we see that the neural ranker trained on the length signal consistently outperforms the ratings-based one. Its trend, although fluctuating, is more stable than that of VW – we believe that this is due to VW’s inherently lower model capacity as well as its training setup, which is mainly optimised for speed. The figure also shows that VW@length is worse than VW@rating, regardless of training data size.

Table 2: Ranking models evaluation: pairwise ranking precision on the independent User feedback dataset and loss on the Length/Rating test sets (Section 5) for the corresponding trainset sizes of 500,000.
Model | P@1 (eval set) | Loss (test set)
Handcrafted | 0.478 | –
VowpalWabbit@length | 0.742 | 0.199
VowpalWabbit@rating | 0.773 | 0.202
DualEncoder@length | 0.365 | 0.239
DualEncoder@rating | 0.584 | 0.247
Neural@length | 0.824 | 0.139
Neural@rating | 0.847 | 0.138

Figure 2: Comparison of rankers trained on extended datasets (x-axis: trainset size from 5·10^5 to 1·10^6; y-axis: Precision@1 for Neural@rating and VW@rating baselines vs. Neural@length and VW@length).

8 Discussion and future work Our evaluation results show that the neural ranker presented above is an efficient approach to response ranking for social conversation.
On a medium-sized training set, the two versions of the neural ranker, length- and ratings-based, showed strongly superior performance to three alternative ranking approaches, and performed competitively with each other. Furthermore, the experiment with extended training sets shows that the accuracy of the length-based neural ranker grows steadily given more unannotated training data, outperforming the rating-based ranker with only slightly larger training sets. The overall results of our experiments confirm that dialogue length, even approximated in quite a straightforward way, provides a sufficient supervision signal for training a ranker for a social conversation model. In future work, we will attempt to further improve the model using the same data in an adversarial setup following Wang et al. (2017). We also plan to directly train our model for pairwise ranking in the fashion of Burges et al. (2005) instead of the current pointwise approach. Finally, we are going to employ contextual sampling of negative responses using approximate nearest neighbour search (Johnson et al., 2017) in order to perform more efficient pairwise training. 9 Related work Work on response ranking for conversational systems has been growing rapidly in recent years. Some authors employ ranking based on heuristically defined measures: Yu et al. (2015, 2016) use a heuristic based on keyword matching, part-of-speech filters, and Word2Vec similarity. Krause et al. (2017) apply standard information retrieval metrics (TF-IDF) with importance weighting for named entities. However, most of the recent research attempts to train the ranking function from large amounts of conversational data, as we do. Some authors use task-based conversations, such as IT forums (Lowe et al., 2015) or customer services (Lu et al., 2017; Kumar et al., 2018), while others focus on online conversations on social media (e.g. Wu et al., 2016; Al-Rfou et al., 2016).
The basic approach to learning the ranking function in most recent work is the same (e.g. Lowe et al., 2015; Al-Rfou et al., 2016; Wu et al., 2016): the predictor is taught to rank positive responses taken from real dialogue data higher than randomly sampled negative examples. Some of the approaches do not even include rich dialogue contexts and use only immediate context-response pairs for ranking (Ji et al., 2014; Yan et al., 2016; Lu et al., 2017). Some authors improve upon this basic scenario: Zhuang et al. (2018) take a desired emotion of the response into account; Liu et al. (2017) focus on the engagement of responses based on Reddit comment ratings; Fedorenko et al. (2017) train the ranking model in several iterations, using highly ranked incorrect responses as negative examples for the next iteration. Nevertheless, to our knowledge, none of the prior works attempt to optimise for long-term dialogue quality; unlike in our work, their only ranking criterion is focused on the immediate response. 10 Conclusion We have presented a neural response ranker for open-domain ‘social’ dialogue systems and described two methods for training it using common supervision signals coming from conversational data: user-provided ratings and dialogue length. We demonstrated its efficiency by evaluating it using explicit positive feedback as a measure of user engagement. Specifically, trained on ratings, our neural ranker consistently outperforms several strong baselines; moreover, given larger amounts of data and only using conversation length as the objective, the ranker performs better than the ratings-based one, reaching 0.87 Precision@1.
This shows that conversation length can be used as an optimisation objective for generating engaging social dialogues, which means that we no longer need the expensive and time-consuming procedure of collecting per-dialogue user ratings, as was done for example in the Alexa Prize 2017 and is common practice in conversational AI research. Per-turn user ratings may still be valuable to collect for such systems, but these are even more expensive and problematic to obtain. Looking ahead, this advance will make data collection for social conversational agents simpler and less expensive in the future. Acknowledgements This research received funding from the EPSRC project MaDrIgAL (EP/N017536/1). The Titan Xp used for this research was donated by the NVIDIA Corporation."
+ },
+ {
+ "url": "http://arxiv.org/abs/1810.03352v1",
+ "title": "Multi-Task Learning for Domain-General Spoken Disfluency Detection in Dialogue Systems",
+ "abstract": "Spontaneous spoken dialogue is often disfluent, containing pauses,\nhesitations, self-corrections and false starts. Processing such phenomena is\nessential in understanding a speaker's intended meaning and controlling the\nflow of the conversation. Furthermore, this processing needs to be word-by-word\nincremental to allow further downstream processing to begin as early as\npossible in order to handle real spontaneous human conversational behaviour.\n In addition, from a developer's point of view, it is highly desirable to be\nable to develop systems which can be trained from `clean' examples while also\nable to generalise to the very diverse disfluent variations on the same data --\nthereby enhancing both data-efficiency and robustness. In this paper, we\npresent a multi-task LSTM-based model for incremental detection of disfluency\nstructure, which can be hooked up to any component for incremental\ninterpretation (e.g. an incremental semantic parser), or else simply used to\n`clean up' the current utterance as it is being produced.\n We train the system on the Switchboard Dialogue Acts (SWDA) corpus and\npresent its accuracy on this dataset. Our model outperforms prior neural\nnetwork-based incremental approaches by about 10 percentage points on SWDA\nwhile employing a simpler architecture. To test the model's generalisation\npotential, we evaluate the same model on the bAbI+ dataset, without any\nadditional training. bAbI+ is a dataset of synthesised goal-oriented dialogues\nwhere we control the distribution of disfluencies and their types. This shows\nthat our approach has good generalisation potential, and sheds more light on\nwhich types of disfluency might be amenable to domain-general processing.",
+ "authors": "Igor Shalyminov, Arash Eshghi, Oliver Lemon",
+ "published": "2018-10-08",
+ "updated": "2018-10-08",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "I.2.7"
+ ],
+ "main_content": "Introduction It is uncontested that humans process (parse and generate) language, incrementally, word by word, rather than turn by turn, or sentence by sentence (Howes et al., 2010; Crocker et al., 2000; Ferreira et al., 2004). This leads to many characteristic phenomena in spontaneous dialogue that are dif\ufb01cult to capture in traditional linguistic approaches and are still largely ignored by dialogue system developers. These include various kinds of context-dependent fragment (Fern\u00b4 andez and Ginzburg, 2002; Fern\u00b4 andez, 2006; Kempson et al., 2017), false starts, suggested add-ons, barge-ins and dis\ufb02uencies. In this paper, we focus on dis\ufb02uencies: pauses, hesitations, false starts and self-corrections that are common in natural spoken dialogue. These proceed according to a well-established general structure with three phases (Shriberg, 1994): (1) with [Italian | {z } reparandum + {uh} | {z } interregnum Spanish] | {z } repair cuisine Speci\ufb01c dis\ufb02uency structures have been shown to serve different purposes for both the speaker & the hearer (see e.g Brennan and Schober (2001)), for example, a \ufb01lled pause such as \u2018uhm\u2019 can elicit a completion from the interlocutor, but also serve as a turn-holding device; mid-sentence self-corrections are utilised to deal with the speaker\u2019s own error as early as possible, thus minimising effort. In dialogue systems, the detection, processing & integration of dis\ufb02uency structure is thus crucial to understanding the interlocutor\u2019s intended meaning (i.e. robust Natural Language Understanding), but 1Code and trained models available at https://bit.ly/multitask_disfluency arXiv:1810.03352v1 [cs.CL] 8 Oct 2018 \falso for coordinating the \ufb02ow of the interaction. 
Like dialogue processing in general, the detection & integration of dis\ufb02uencies needs to be strongly incremental: it needs to proceed word by word, enabling downstream processing to begin as early as possible, leading to more ef\ufb01cient and more naturally interactive dialogue systems (Skantze and Hjalmarsson, 2010; Schlangen and Skantze, 2009). Furthermore, incremental dis\ufb02uency detection needs to proceed with minimal latency & commit to hypotheses as early as possible in order to avoid \u2018jittering\u2019 in the output and having to undo the downstream processes started based on erroneous hypotheses (Schlangen and Skantze, 2009; Hough and Purver, 2014; Hough and Schlangen, 2015) . While many current data-driven dialogue systems tend to be trained end-to-end on natural data, they don\u2019t normally take the existence of dis\ufb02uencies into account. Recent experiments have shown that endto-end dialogue models such as Memory Networks (MemN2N) (Bordes et al., 2017) need impractically large amounts of training data containing dis\ufb02uencies and with suf\ufb01cient variation in order to obtain reasonable performance (Eshghi et al., 2017; Shalyminov et al., 2017). The problem is that, taken together with the particular syntactic and semantic contexts in which they occur, dis\ufb02uencies are very sparsely distributed, which leads to a large mismatch between the training data and actual real-world spontaneous user input to a deployed system. This suggests a more modular, pipelined approach, where dis\ufb02uencies are detected and processed by a separate, domain-general module, and only then any resulting representations are passed on for downstream processing. The upshot of such a modular approach would be a major advantage in generality, robustness, and data-ef\ufb01ciency. In this paper, we build on the state-of-the-art neural models of Hough and Schlangen (2015) and Schlangen and Hough (2017). 
Our contributions are that: (1) we produce a new, multi-task LSTM-based model with a simpler architecture for incremental dis\ufb02uency detection, with signi\ufb01cantly improved performance on the SWDA, a dis\ufb02uency-tagged corpus of open-domain conversations; and (2) we perform a generalisation experiment measuring how well the models perform on unseen data using the controlled environment of bAbI+ (Eshghi et al., 2017), a synthetic dataset of goal-oriented dialogues in a restaurant search domain augmented with spoken dis\ufb02uencies. 2 Related work Work on dis\ufb02uency detection has a long history, going back to Charniak and Johnson (2001) who set the challenge. One of the important dividing lines through this work is the incrementality aspect, i.e. whether dis\ufb02uency structure is predicted word by word. In the non-incremental setting, as the problem is essentially sequence tagging, neural models have been widely used. As such, there are approaches using an encoder-decoder model (seq2seq) with attention (Wang et al., 2016) and a Stack-LSTM model working as a buffer of a transition-based parser (Wang et al., 2016; Wang et al., 2017), the latter being state-of-the-art for the non-incremental setting. Incremental, online processing of dis\ufb02uencies is a more challenging task, if only because there is much less information available for tagging, viz. only the context on the left. In a practical system, it also involves extra constraints and evaluation criteria such as minimal latency and revisions to past hypotheses which lead to \u2018jittering\u2019 in the output with all the dependent downstream processes having to be undone, thus impeding ef\ufb01ciency (see the illuminating discussions in Hough and Purver (2014) and Purver et al. (2018)). 
Incremental disfluency detection models include Hough and Purver (2014), who approach the problem information-theoretically, using local surprisal/entropy measures and a pipeline of classifiers for recognition of the various components of disfluency structure. While the model is very effective, it leaves one desiring a simpler alternative. This was made possible after the overall success of RNN-based models, which Hough and Schlangen (2015) exploit. We build on this model here, as well as evaluate it further (see below). On the other hand, Schlangen and Hough (2017) tackle the task of joint disfluency prediction and utterance segmentation, and demonstrate that the two tasks interact and are thus better approached jointly. Language models have been extensively used for improving neural models’ performance. For example, Peters et al. (2018) showed that a pre-trained language model improves RNN-based models’ performance in a number of NLP tasks – either as the main feature representation for the downstream model, or as additional information in the form of a latent vector in the intermediate layers of complex models. The latter way was also employed by Peters et al. (2017) in the task of sequence labeling. Finally, a multi-task setup with language modelling as the second objective – the closest to our approach – was used by Rei (2017) to improve the performance of RNN-based Named Entity Recognition. We note that there is no previous approach to multi-task disfluency detection using a secondary task as general and versatile as language modelling. Furthermore, none of the works mentioned study how well their models generalise across datasets, nor do they shed much light on what kinds of disfluency structure are harder to detect, and why, as we try to do below. (Figure 1: Multi-task LSTM model architecture)
3 Dis\ufb02uency detection model Our approach to dis\ufb02uency detection is a sequence tagging model which makes single-word predictions given context words wt\u2212n+1, ..., wt of a maximum length n. We train it to perform two tasks jointly (c.f. Hough and Schlangen (2015)): (1) predicting the dis\ufb02uency tag of the current word, P(yt|wt\u2212n+1, ..., wt); and (2) predicting the next word in the sequence in a language model way, P(wt+1|wt\u2212n+1, ..., wt). At training time, we optimise the two tasks jointly, but at test time we only look at the resulting tags and ignore the LM predictions. Our model uses a shared LSTM encoder (Hochreiter and Schmidhuber, 1997) with combined word/POS-tag tokens which provides context embedding for two independent multilayer perceptrons (MLPs) making the predictions for the two tasks. The combined token vocabulary (word+POS) size for the SWDA dataset is approximately 30% larger than the original word-only version \u2014 given this, concatenation is the simplest and most ef\ufb01cient way to pass part-of-speech information into the model. The intuition behind adding an additional task to optimise for is that it serves as a natural regulariser: given an imbalanced label distribution (see Section 4 for the dataset description), only learning dis\ufb02uency labels may lead to a higher degree of over\ufb01tting, and introducing an additional task with more uniformly distributed labels can help the model generalise better. Other potential bene\ufb01ts of having the model work as an LM is the possibility of unsupervised model improvements, e.g. pre-training of the model\u2019s LM part from larger text corpora or 1-shot \ufb01ne-tuning to new datasets with different word sequence patterns. 
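The joint setup described above (a shared recurrent encoder feeding two independent prediction heads) can be sketched as follows; this uses a simplified plain-RNN update as a stand-in for the paper's LSTM encoder, and all names and dimensions are ours:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a vector of logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def multitask_step(h_prev, x, params):
    """One incremental step: the shared encoder state feeds two
    independent heads -- disfluency tag and next-word LM."""
    # Simplified recurrent update standing in for the LSTM encoder
    h = np.tanh(params['W_h'] @ h_prev + params['W_x'] @ x)
    tag_probs = softmax(params['W_tag'] @ h)  # P(y_t | w_1..w_t)
    lm_probs = softmax(params['W_lm'] @ h)    # P(w_{t+1} | w_1..w_t)
    return h, tag_probs, lm_probs
```

At training time both heads contribute to the loss; at test time only the tag distribution is used and the LM predictions are discarded, exactly as in the setup above.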
In order to address the problem of significantly imbalanced training data (the majority of the words in the corpus are fluent), we use a weighted cross-entropy loss in which the weight of a data point is inversely proportional to its label’s frequency in the training set. Our overall loss function is of the form: L = W·L_main + α·L_lm + (λ/2) Σ_i w_i^2, where W·L_main and L_lm are the respective losses for the disfluency tagging (class-weighted) and language modelling tasks (the LM loss coefficient α is tuned empirically). The last term is L2 regularisation, which we apply to the model’s weight parameters w_i (those of word embeddings, LSTM gates, and MLPs), leaving all the biases intact. The L2 coefficient λ is also tuned empirically (see Appendix A for the values of the constants). The model is implemented in TensorFlow (Abadi et al., 2015) and is openly available. 4 Disfluency datasets and tags 4.1 The Switchboard dataset For training our model, we use the Switchboard Dialog Acts dataset (SWDA) with manually annotated disfluency tags (Meteer et al., 1995). We use a pre-processed version of the dataset by Hough and Schlangen (2015) containing 90,497 utterances with transformed tagging: following their convention, there are 27 tags in total, consisting of: the ⟨f/⟩ tag for fluent tokens; ⟨e/⟩ for edit tokens; ⟨rm−n⟩ tags for repair tokens that determine the start of the reparandum to be n tokens/words back; and ⟨rpSub⟩ & ⟨rpDel⟩ tags, which mark the end of the repair and classify whether the repair is a substitution or deletion repair. The latter tokens can be combined with ⟨rm−n⟩ tokens, which explains the total of 27 tags; see (2) for an example where the repair word, ‘Spanish’, is tagged ⟨rm−4⟩ ⟨rpSub⟩, meaning this is a substitution repair that retraces 4 tokens back from the current token.
(2) with⟨f/⟩ [Italian⟨f/⟩ (reparandum) + {uh⟨e/⟩ no⟨e/⟩ uh⟨e/⟩} (interregnum) Spanish⟨rm−4⟩⟨rpSub⟩ (repair)] cuisine⟨f/⟩ The distribution of different types of tokens is highly imbalanced: only about 4% of all tokens are involved in disfluency structures (the detailed statistics are shown in Appendix A). See Section 3 above for how our model deals with this. 4.2 The bAbI+ dataset To evaluate the cross-dataset generalisation properties of our model and that of Hough and Schlangen (2015), we employ an additional dataset – bAbI+, introduced by Shalyminov et al. (2017). bAbI+ is an extension of the original bAbI Task 1 dialogues (Bordes et al., 2017) where different disfluency structures – such as hesitations, restarts, and corrections – can be mixed in probabilistically. Crucially, these can be mixed in with complete control over the syntactic and semantic contexts in which the phenomena appear, and therefore the bAbI+ environment allows controlled, focused experimentation on the effect of different phenomena and their distributions on the performance of different models. Here, we use bAbI+ tools2 to generate new data for the controlled generalisation experiment3 of what kinds of disfluency phenomena are captured better by each model. We focus here on the following disfluency patterns: • Hesitations, e.g. as in “we will be uhm eight” (mixed in are single edit tokens); • Prepositional Phrase restarts (PP-restart), e.g.
“in a in a um in a moderate price range” (repair of a PP at its beginning, with or without an interregnum); • Clausal restarts (CL-restart), e.g. “can you make a restaurant uhm yeah can you make a restaurant reservation for four people with french cuisine in a moderate price range” (repair of the utterance from the beginning, starting at arbitrary positions); • Corrections (NP and PP), e.g. “with Italian sorry Spanish cuisine”, as was initially discussed in Section 1. We generated independent bAbI+ datasets with each disfluency type. The disfluency phenomena above were chosen to resemble disfluency patterns in the original SWDA corpus (see Tables 3, 4, and 5 for examples), as well as from intuitive considerations about the phenomena relevant to goal-oriented dialogue (namely, corrections). The intuition for a generalisation experiment with data like this is as follows: while having similar disfluency patterns, our bAbI+ utterances differ from SWDA in the vocabulary and the word sequences themselves, as they are in the domain of goal-oriented human-computer dialogue – this property makes it possible to evaluate the generalisation capabilities of a model outside its training domain. (Footnote 2: See https://bit.ly/babi_tools. Footnote 3: Data is available at http://bit.ly/babi_plus_disfluencies_study)

Table 1: Evaluation of the disfluency tagging models
Model | Fe | Frm | Frps
(Hough and Schlangen, 2015) | 0.902 | 0.711 | 0.689
(Schlangen and Hough, 2017) | 0.918 | – | 0.719
LSTM | 0.915 | 0.693 | 0.775
Multi-task LSTM | 0.919 | 0.753 | 0.816

Table 2: Controlled generalisation evaluation
Model | hesitations (Fe) | PP restarts (Fe / Frm / Frps) | CL-restarts (Fe / Frm / Frps)
(Hough and Schlangen, 2015) | 0.917 | 0.774 / 0.875 / 0.877 | 0.938 / 0.471 / 0.630
LSTM | 0.956 | 1.0 / 0.982 / 0.993 | 0.948 / 0.36 / 0.495
Multi-task LSTM | 0.910 | 1.0 / 0.993 / 0.997 | 0.991 / 0.484 / 0.659
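As a concrete illustration of the Section 4.1 tag scheme, the sketch below applies ⟨rm−n⟩ tags to "clean up" an utterance, removing the reparandum and interregnum of example (2); the helper name and the ASCII tag spelling (<rm-4> etc.) are ours:

```python
import re

def cleanup(tagged_tokens):
    """Apply disfluency tags to clean up an utterance: an <rm-n>
    tag on a repair token deletes the n preceding tokens (reparandum
    plus interregnum); leftover <e/> edit tokens are dropped."""
    out = []
    for word, tag in tagged_tokens:
        m = re.match(r'<rm-(\d+)>', tag)
        if m:
            del out[-int(m.group(1)):]  # retrace n tokens back
        out.append((word, tag))
    return ' '.join(w for w, t in out if t != '<e/>')

# Example (2): 'with Italian uh no uh Spanish cuisine'
example = [('with', '<f/>'), ('Italian', '<f/>'), ('uh', '<e/>'),
           ('no', '<e/>'), ('uh', '<e/>'), ('Spanish', '<rm-4>'),
           ('cuisine', '<f/>')]
# cleanup(example) -> 'with Spanish cuisine'
```

This is exactly the "clean up the current utterance as it is being produced" use of the tagger mentioned in the abstract, stated under the simplifying assumption that tags are already correct.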
5 Evaluation and experimental setup We employ exactly the same evaluation criteria as Hough and Schlangen (2015): micro-averaged F1 scores for edit tokens (Fe) and ⟨rm⟩ tokens (Frm), as well as for whole repair structures (Frps). We compare our multi-task LSTM model to its single-task version (disfluency tag predictions only), as well as to the system of Hough and Schlangen (2015) and the joint disfluency tagging/utterance segmentation model of Schlangen and Hough (2017) (all of the applicable word-level metrics on dialogue transcripts). These use a hand-crafted Markov Model for post-processing, whereas our model learns in an end-to-end fashion. We train our model using the SGD optimiser and monitor the Frm on the dev set as a stopping criterion. The model’s hyperparameters are tuned heuristically; the final values are listed in Appendix A. We use class weights in the main task’s loss to deal with the highly imbalanced data, so that the weight of the kth class is calculated as W_k = 1/(C_k)^γ, where C_k is the number of kth-class instances in the training set, and γ is a smoothing constant set empirically. 5.1 Results The results are shown in Table 1. Both the single- and multi-task LSTM are able to outperform the Hough and Schlangen (2015) model on edit tokens and repair structures, but the multi-task one performs significantly better on ⟨rm⟩ tags and surpasses both previous models. The reason Frps is higher than Frm in general is that, due to the tag conversion, fluent tokens inside reparanda and repairs are treated as part of the repair, and they contribute to the global positive and negative counters used in the micro-averaged F1.
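The class-weighting scheme above, W_k = 1/(C_k)^γ, can be sketched as follows; the γ value in the test is an arbitrary illustration, not the paper's tuned constant:

```python
import math

def class_weights(label_counts, gamma):
    """W_k = 1 / (C_k ** gamma): rarer labels get larger weights,
    counteracting the dominance of fluent <f/> tokens."""
    return {k: 1.0 / (c ** gamma) for k, c in label_counts.items()}

def weighted_nll(probs, gold_label, weights):
    """Class-weighted negative log-likelihood for one token."""
    return -weights[gold_label] * math.log(probs[gold_label])
```

With γ = 1 the weights are exactly inverse frequencies; smaller γ values smooth the imbalance less aggressively.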
\fRepair length Repair text Frequency 1 i i i 139 the the the 33 and and and 31 it it it 29 its its its 26 2 it was it was 67 i dont i dont 57 i think i think 44 in the in the 39 do you do you 23 3 a lot of a lot of 7 that was uh that was 5 it was uh it was 5 what do you what do you 4 i i dont i dont 4 Table 3: Most common repairs in SWDA POS pattern Examples repairs % DT NN DT NN this woman this socialite 0.1 a can a garage the school that school JJ NN JJ NN high school high school 0.03 good comedy good humor israeli situation palestinian situation DT UH DT NN that uh that punishment 0.02 the uh the cauli\ufb02ower that uh that adjustment DT NN UH DD NN a friend uh a friend 0.01 a lot uh a lot a lot um a lot NN PRP VBP NN NN ribbon you know hair ribbon 0.01 thing you know motion detector Table 4: SWDA repairs by POS-tag pattern Keyword pattern Examples repairs % sorry * or im sorry no 0.02 um im sorry what thank you im sorry i just got home from work sorry * and he told us theres two sixteen bit slots and two eight bit 0.009 sorry two four sixteen bit slots and two eight bit slots available for the user i mean * i mean 4 i mean yeah i mean uh i mean i i mean * i mean i i 0.5 but i mean whats whats happened here is is is i mean you youve Table 5: SWDA repairs by interregnum Controlled generalisation experiment results are shown in Table 2 \u2014 note that we could only run the model of Hough and Schlangen (2015) on bAbI+ data because that of Schlangen and Hough (2017) works in a setup different from ours. It can be seen that the LSTM tagger is somewhat over\ufb01tted to edit tokens on SWDA. This is the reason it outperforms the Multi-task LSTM on the hesitations dataset and has a tied 1.0 on edit tokens on PP restarts dataset. In all other cases, Multi-task LSTM demonstrates superior generalisation. As for NP/PP self-corrections which are not present in Table 2: none of the systems tested were able to handle these. 
Evaluation on this dataset revealed 0.0 accuracy with all systems. We discuss these results below. 6 Discussion and future work We have presented a multi-task LSTM-based disfluency detection model which outperforms previous neural network-based incremental models while being significantly simpler than them. For the first time, we have demonstrated the generalisation potential of a disfluency detection model via cross-dataset evaluation. As the results show, all models achieve a reasonably high level of generalisation on very local disfluency patterns such as hesitations and PP restarts. However, the accuracy drops significantly on less restricted restarts spanning arbitrary regions of utterances from the beginning. On the majority of those disfluency patterns, our model achieves a superior level of generalisation. Interestingly, none of the models were able to detect NP or PP corrections such as those often glossed in disfluency papers (e.g. "A flight to Boston uh I mean to Denver"). The most likely explanation for this is the extreme sparsity of such disfluencies in the SWDA dataset. We performed an analysis of SWDA disfluencies in order to explore this hypothesis, and examined their distribution based on length in tokens and on POS-tag sequence patterns of interest. As shown in Tables 3 and 4, the vast majority of the disfluencies found are just repetitions, without speakers actually correcting themselves. This observation is in line with prior studies showing that the distribution of repair types varies significantly across domains (Colman and Healey, 2011), modalities (Oviatt, 1995), and gender & age groups (Bortfeld et al., 2001); see Purver et al. (2018) for a discussion.
While this is very likely the correct explanation, we cannot rule out the possibility that such self-corrections are inherently more difficult to process for particular models; settling this would require a separate experiment that holds the frequency of particular repair structures in the training data constant. Addressing this issue is our next step, since we designed the multi-task LSTM with this in mind. As such, we will explore possibilities of knowledge transfer to new closed domains in a 1-shot setting, both with regular supervised training and with unsupervised LM fine-tuning. 7 Acknowledgements We are very grateful to Julian Hough for offering extensive help with running their 2015 disfluency detection model (Hough and Schlangen, 2015) and their SWDA tagging conventions, as well as for constructive feedback throughout."
+ },
+ {
+ "url": "http://arxiv.org/abs/1709.07840v1",
+ "title": "Challenging Neural Dialogue Models with Natural Data: Memory Networks Fail on Incremental Phenomena",
+ "abstract": "Natural, spontaneous dialogue proceeds incrementally on a word-by-word basis;\nand it contains many sorts of disfluency such as mid-utterance/sentence\nhesitations, interruptions, and self-corrections. But training data for machine\nlearning approaches to dialogue processing is often either cleaned-up or wholly\nsynthetic in order to avoid such phenomena. The question then arises of how\nwell systems trained on such clean data generalise to real spontaneous\ndialogue, or indeed whether they are trainable at all on naturally occurring\ndialogue data. To answer this question, we created a new corpus called bAbI+ by\nsystematically adding natural spontaneous incremental dialogue phenomena such\nas restarts and self-corrections to the Facebook AI Research's bAbI dialogues\ndataset. We then explore the performance of a state-of-the-art retrieval model,\nMemN2N, on this more natural dataset. Results show that the semantic accuracy\nof the MemN2N model drops drastically; and that although it is in principle\nable to learn to process the constructions in bAbI+, it needs an impractical\namount of training data to do so. Finally, we go on to show that an\nincremental, semantic parser -- DyLan -- shows 100% semantic accuracy on both\nbAbI and bAbI+, highlighting the generalisation properties of linguistically\ninformed dialogue models.",
+ "authors": "Igor Shalyminov, Arash Eshghi, Oliver Lemon",
+ "published": "2017-09-22",
+ "updated": "2017-09-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "I.2.7"
+ ],
+ "main_content": "Introduction A key problem for the practical data-driven (rather than hand-crafted) development of task-oriented dialogue systems is that they are generally turn-based, and so do not support natural, everyday incremental (i.e. word-by-word) dialogue processing. This means that they often cannot process naturally occurring incremental dialogue phenomena such as mid-sentence restarts and self-corrections (Hough, 2015; Howes et al., 2009). Dialogue systems will not be able to make sense of the everyday language produced by users, which is replete with pauses, interruptions, self-corrections and other inherently incremental dialogue phenomena, until they incorporate one or another form of incremental language processing. Indeed, incremental dialogue systems (i.e. those processing word-by-word instead of at utterance/turn boundaries) have previously been empirically shown to be beneficial and more natural for users (Aist et al., 2007; Skantze and Hjalmarsson, 2010). In this paper, we explore the performance of the state-of-the-art neural retrieval model of Bordes et al. (2017) on dialogues containing some prototypical incremental dialogue structures. Bordes et al. (2017) initially presented the bAbI dialog tasks dataset, aimed at learning goal-oriented dialogue systems in an end-to-end fashion: there are no annotations in the data whatsoever, and the model learns all components of a dialogue system. On this dataset, they report that End-to-End Memory Networks (henceforth MemN2Ns) achieve an impressive 100% performance on a test set of 1000 dialogues, after being trained on 1000 similar dialogues. However, the bAbI dataset is both synthetic and clean: it contains none of the more interesting, naturally occurring, disfluent phenomena identified above.
To assess the effectiveness of the MemN2N model on more natural dialogue data, we introduce an extended, incremental version of the bAbI dataset, dubbed bAbI+ (see section 2.2), which we created by systematically adding self-corrections, hesitations, and restarts to the original bAbI dataset. We go on to explore the performance of MemN2N on this new dataset. The results of our experiments show that the semantic accuracy of MemN2N, measured in terms of how well the model predicts API calls (a non-linguistic action, in this case querying a database with the user's requirements) at the end of a dialogue segment, drops very significantly (by about 50%) even when trained on the full bAbI+ dataset. Finally, we compare these results to the methodologically distinct, linguistically informed model of Eshghi et al. (2017b) and Kalatzis et al. (2016), who employ an incremental dialogue parser, DyLan (Eshghi, 2015; Eshghi et al., 2011; Purver et al., 2011), based around the Dynamic Syntax grammar framework (Kempson et al., 2001; Cann et al., 2005). We show here that there is no drop in performance on the same semantic accuracy metric from bAbI to bAbI+, with both at 100%, due to the rich, theoretically-grounded knowledge incorporated within the model. 2 Exploring the performance of MemN2Ns Our focus in this paper is to explore the approach of Bordes et al. (2017), and its performance on spontaneous dialogue data. 2.1 The Dialog bAbI tasks dataset We use Facebook AI Research's Dialog bAbI tasks dataset (Bordes et al., 2017). These are goal-oriented dialogues in the domain of restaurant search. In the dataset, there are 6 tasks of increasing complexity, ranging from only collecting the user's restaurant preferences up to conducting full dialogues with changes in the user's goal and providing extra information upon request.
The first 5 tasks are 'clean' dialogues composed synthetically, and they thus lack the features of natural everyday conversations. Task 6, in turn, is based on real dialogues collected for the Dialog State Tracking Challenge 2. Recent studies have shown different ways in which MemN2Ns are outperformed: Eric and Manning (2017) introduced the Copy-Augmented Sequence-to-Sequence model that outperforms MemN2N on Task 6; Williams et al. (2017) presented a hybrid RNN + rule-based model, trainable in a 2-stage supervised + reinforcement learning setup, outperforming MemN2N on Tasks 5 and 6. However, none of these studies control for the type of complexity that might result in worse performance, and thus they do not shed any light on why a particular architecture such as MemN2N might be at a disadvantage. While Task 5 dialogues have the full task complexity, conducting full dialogues with an unfixed user goal and additional information requests, they are still composed programmatically and contain minimal surface variation. The Task 6 dialogues, on the other hand, are complex both in terms of surface variation and of the task itself. Here, in order to study the specific effects of incremental variations in dialogue such as conversational disfluencies, we focus on Task 1, where in each dialogue the system asks the user about their preferences for the properties of a restaurant, and each dialogue results in an API call containing the values of each slot obtained (e.g. food-type=french); the ability to predict the API calls correctly thus provides a direct measure of how well a particular model can interpret the dialogues. Using the MemN2N model, the approach of Bordes et al. (2017) achieves 100% performance, measured as per-utterance accuracy including the final API call, after training on 1000 dialogues.
2.2 The bAbI+ dataset While containing sufficient lexical variation, the original bAbI Task 1 dialogues significantly lack the incremental and interactional variation vital for natural real-life dialogues. In order to obtain such variation while keeping a controllable environment close to the laboratory conditions that bAbI offers, we created the bAbI+ dataset by systematically transforming the original dataset's dialogues. bAbI+ is an extension of the bAbI Task 1 dialogues with everyday incremental dialogue phenomena (hesitations, restarts, and corrections; see below). This extension can be seen as orthogonal to the increasing task complexity which Tasks 2-6 offer: we instead increase the complexity of the surface forms of dialogue utterances, while keeping every other aspect of the task fixed. Our incremental modifications model the disfluencies and communication problems in everyday spoken interaction in real-world environments. (Figure 1: MemN2N architecture; (a) single memory hop and (b) multiple memory hops, from Sukhbaatar et al., 2015.) These variations are:
1. Hesitations, e.g. as in "we will be uhm eight";
2. Restarts, e.g. "can you make a restaurant uhm yeah can you make a restaurant reservation for four people with french cuisine in a moderate price range";
3. Corrections affecting task-specific information, both short-distance ones correcting one token, e.g. "with french oh no spanish food", and long-distance NP/PP-level corrections, e.g. "with french food uhm sorry with spanish food", all within a single user utterance, rather than across multiple turns.
The phenomena above are mixed in probabilistically from fixed sets of templates into the original data2. The modifications affect a total of 11336 utterances in the 3998 dialogues.
Around 21% of the user's turns contain corrections, 40% hesitations, and 5% restarts (these are not mutually exclusive, so an utterance can contain up to 3 modifications). Our modifications, with respect to corrections in particular, are more conservative than those observed in real-world data: Hough (2015) reports that self-corrections appear in 20% of all turns in natural conversations from the British National Corpus, and in 40% of turns in the Map Task, a corpus of human-human goal-oriented dialogues. (Footnote 2: See https://github.com/ishalyminov/babi_tools) Here is part of an example dialogue in the bAbI+ corpus, showing corrections in all user turns:
sys: hello what can I help you with today?
usr: I'd like to book a uhm yeah I'd like to book a table in a expensive price range no sorry in a cheap price range
sys: I'm on it. Any preference on a type of cuisine?
usr: with indian food no sorry with spanish food
sys: where should it be?
usr: in tokyo
sys: how many people would be in your party?
usr: for uhm yeah for eight people please
sys: ok let me look into some options for you
2.3 Memory Networks We follow Bordes et al.'s setup by using a MemN2N (we took an open-source Tensorflow implementation for the bAbI QA tasks and modified it3 according to their setup; see details below). The architecture of a MemN2N is shown in Figure 1 (Sukhbaatar et al., 2015). One of the model's key features is what can be thought of as an "explicit memory" component: before the inference process, all the input sentences are written into the memory, from which they are then read during inference.
The internal memory representation is as follows: an utterance xi is first vectorized as a fixed-sized array of 1-hot vectors (zero padding is used), then each vector is embedded via the matrix A, and finally these embeddings are encoded into a single memory vector mi using temporal encoding (this preserves the information about word order in a sentence; for the details, please refer to Sukhbaatar et al., 2015). (Footnote 3: See https://github.com/ishalyminov/memn2n) The same procedure is applied to the user's input using another embedding matrix B. Another important feature of the MemN2N architecture is reading from memory with attention. With the input sentences and the utterance encoded, the match between each of the memory vectors m_i and the utterance u is calculated: p_i = Softmax(u^T m_i). This is used as the attention vector over the encoded memories further in the inference process. Next, for the final answer prediction, both the attention-weighted memories and the user's utterance are passed through the final weight matrix W: a_hat = Softmax(W(o + u)), where o = sum_i p_i c_i is the weighted memories. For the QA tasks, the answer a_hat is just the index of a word from the vocabulary. In dialogue tasks, however, answers are entire utterances, either system utterances (e.g. "how many people would be in your party?") or API calls ("api_call french london four expensive"). They are still predicted as indices from the answer candidates list, but given that there is, e.g., absolutely no overlap in exact API call examples between the train and test sets, an internal representation of each candidate answer is added to the architecture (Bordes et al., 2017). Thus, the final step now looks as follows: a_hat = Softmax((o + u)^T * W(y)), where y is the vector of answer candidates, processed just as described above for the input sentences, with W as the embedding matrix.
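A minimal single-hop sketch of the attention-and-readout equations above, in NumPy; all dimensions and values are invented for illustration, and the real model additionally learns the embedding matrices A, B, and W:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memn2n_hop(memories, u, W):
    """One MemN2N hop: p_i = Softmax(u^T m_i), o = sum_i p_i * c_i.
    Output memories c_i are tied to input memories m_i here,
    in the spirit of the 'Adjacent' weight-tying scheme."""
    p = softmax(memories @ u)      # attention over encoded story sentences
    o = p @ memories               # attention-weighted memory readout
    return softmax(W @ (o + u))    # distribution over answer candidates

rng = np.random.default_rng(0)
d, n_mem, n_answers = 8, 5, 3
memories = rng.normal(size=(n_mem, d))   # encoded memory vectors m_i
u = rng.normal(size=d)                   # encoded user utterance
W = rng.normal(size=(n_answers, d))      # final weight matrix
a_hat = memn2n_hop(memories, u, W)       # a probability distribution
```

The predicted answer is then the argmax index into the candidate list, be that a system utterance or an API call.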
The architecture described above may be stacked into several layers called hops (Figure 1 (b)); refer to Sukhbaatar et al. (2015) for details. Here we are initially interested in the single-hop configuration (see the next section), for which Bordes et al. (2017) report their results. 2.4 Data preprocessing and the MemN2N setup In order to adapt the data for the MemN2N, we transform the dialogues into triplets. The number of triplets for a single dialogue is equal to the number of system turns, and in each triplet, the answer is the current system turn, the question is the user turn preceding it, and the story is a list of all the previous turns from both sides. In addition, each sentence in the story gets 2 extra tokens: the number of the turn, and the ID of the speaker (Bordes et al., 2017). We also use a single embedding matrix A for both the input memories and the user's question; the same matrix is used for the output memories representation. In this we follow Bordes et al. (2017), and it corresponds to the "Adjacent" weight-tying model of Sukhbaatar et al. (2015). In our setup, there are no out-of-vocabulary words for the model during either training or testing, for both bAbI and bAbI+, with the maximum sentence length taking account of the increase due to the transformations in bAbI+. We train our MemN2Ns with a Stochastic Gradient Descent optimizer for 100 epochs with a learning rate of 0.01 and a batch size of 8; in this we again follow the configuration reported by Bordes et al. (2017) to be the best for bAbI Task 1. 2.5 Experiments We are here interested in: (1) how robust MemN2Ns are to the surface transformations in bAbI+ when trained on bAbI; (2) whether MemN2Ns can learn to interpret bAbI+ when they are in fact trained on similar data that actually contains the bAbI+ structures, i.e. when trained on bAbI+; and (3) if so, how much bAbI+ data is needed for this.
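The triplet construction described in section 2.4 can be sketched as follows; this is a simplified illustration, and the exact format of the turn-number and speaker tokens is an assumption:

```python
def dialogue_to_triplets(turns):
    """turns: list of (speaker, utterance) pairs in order.
    Yields one (story, question, answer) triplet per system turn:
    the answer is that system turn, the question is the preceding
    user turn, and the story is every earlier turn, each tagged
    with its turn number and speaker ID."""
    triplets = []
    for i, (speaker, utterance) in enumerate(turns):
        if speaker != "sys" or i == 0:
            continue
        story = [f"{j} {spk} {utt}" for j, (spk, utt) in enumerate(turns[:i - 1])]
        question = turns[i - 1][1]  # the user turn preceding this system turn
        triplets.append((story, question, utterance))
    return triplets

turns = [
    ("sys", "hello what can I help you with today?"),
    ("usr", "book a table with french food"),
    ("sys", "how many people would be in your party?"),
    ("usr", "for four people please"),
    ("sys", "api_call french four"),
]
trips = dialogue_to_triplets(turns)  # two triplets, one per answerable sys turn
```

The final triplet pairs the full dialogue history with the API call, which is what the semantic accuracy measure below is computed on.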
While (1) is a question about the generalisation properties of a model, (2) & (3) are about in-principle and/or practical limitations of MemN2Ns in learning to interpret dialogues containing, e.g., self-corrections, where utterances contain both the correct and an incorrect (and subsequently repaired) slot value (e.g. "for four sorry five people"). To answer (1) we therefore train the model on the bAbI dataset and test on bAbI+; and to answer (2) & (3) we train the model on the bAbI+ train set and test it on the bAbI+ test set. Furthermore, in order to explore the impact of the amount of training data on the model's performance, we perform the latter experiment with varying train set sizes, as well as varying the hyperparameters: embedding size & number of hops.

Table 1: MemN2N API call accuracy (%)

  train / test set configuration | train accuracy | test accuracy
  bAbI / bAbI                    | 100            | 100
  bAbI / bAbI+                   | 100            | 28
  bAbI+ / bAbI                   | 67             | 99
  bAbI+ / bAbI+                  | 72             | 53

Table 2: MemN2N API call accuracy (%) with extended training data

  training bAbI+ dialogues | memory hops | embedding size | train accuracy | test accuracy
  2000                     | 2           | 128            | 72.5           | 57.5
  5000                     | 2           | 128            | 72.7           | 60.7
  10000                    | 2           | 128            | 72.8           | 65.8
  50000                    | 1           | 128            | 82.6           | 78.2
  100000                   | 1           | 64             | 83.3           | 80.5

The extended training data is obtained in the same way as the initial bAbI+ dataset: we go over the same original bAbI dialogues and keep randomly mixing in the incremental modifications. Performance Measure: Semantic Accuracy. Self-corrections and restarts are especially problematic because processing them is potentially a non-monotonic operation, involving deletion and replacement in the resulting semantic representations.
To measure the model's effectiveness in processing such structures, we therefore consider the semantic accuracy of the model, defined as how accurately it predicts the final API calls; recall that the API calls contain all the values of the slots corresponding to the user's request expressed in the preceding dialogue. Hypotheses. We predicted that (i) given the positional encoding of memory vectors in the MemN2N model and the attendant attention mechanisms, it would be able to learn to process bAbI+ dialogues given that it was trained on similar data, resulting in an insignificant drop in performance from bAbI to bAbI+ data; (ii) a lot more data would be needed to learn to process the bAbI+ structures; and (iii) if trained on bAbI data, there would be a very significant drop in performance on bAbI+, with incorrect API calls predicted as a result of incorrect weightings and a total lack of opportunity to learn the meaning of words such as "no" or "sorry" which trigger the self-corrections and restarts. Finally, we also perform training on bAbI+ and testing on bAbI to see if the model is able to generalise from the more complex back to the simpler data. 2.6 Results and Discussion 2.6.1 The original setup Table 1 shows how the MemN2N model performs in different conditions. For this, we used identical hyperparameter settings to those of Bordes et al. (2017): 1 hop, embedding size 128, 100 epochs, learning rate of 0.01, and batch size of 8. The train and test sets each contain 1000 dialogues, i.e. the entire corpus. First note that the first row shows identical results to those of Bordes et al. (2017): training on bAbI and testing on the bAbI test set results in 100% accuracy in API call prediction. It is therefore highly unlikely that the rest of the results reported here are due to implementational differences between our setup and that of Bordes et al. (2017).
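The semantic accuracy measure described above amounts to exact matching of the predicted final API call against the gold annotation. A minimal sketch, where the API-call string format follows the paper's example but the helper name and slot values are illustrative:

```python
def semantic_accuracy(predicted_calls, gold_calls):
    """Fraction of dialogues whose final API call is predicted exactly,
    i.e. every slot value matches the user's (possibly repaired) request."""
    assert len(predicted_calls) == len(gold_calls)
    correct = sum(p == g for p, g in zip(predicted_calls, gold_calls))
    return correct / len(gold_calls)

gold = ["api_call spanish tokyo eight cheap"]
# A model that fails to process "with indian food no sorry with spanish food"
# keeps the repaired-away slot value, and the whole call counts as wrong:
pred = ["api_call indian tokyo eight cheap"]
```

Because a single unrepaired slot invalidates the whole call, this measure directly penalises failures on self-corrections.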
As we had predicted, the model performs very badly when trained on bAbI and tested on bAbI+, showing very poor robustness to the variations we had introduced, and indicating significant overfitting to the original data. When the model is trained on bAbI+ data, its performance on the bAbI+ API calls nearly doubles, showing that the model can potentially learn to process the bAbI+ test set given enough data; see below. Nevertheless, it remains very low at about 53%, making any system created in this fashion unusable in the face of spontaneous dialogue data. We also note that the accuracy on the train set itself is now lower. This suggests that bAbI+ is a dataset significantly harder to learn (or overfit to), and given the extreme homogeneity of the original bAbI train and test sets, overfitting might be one reason for the model's outstanding results. However, training on bAbI+ and testing on bAbI shows that our assumption about the model's ability to generalize to the simpler data appears to be correct. 2.6.2 How much data is enough data? Table 2 shows how MemN2N performs on the same initial, fixed bAbI+ test set when trained on progressively more data, up to 100000 bAbI+ dialogues. As MemN2N's performance on bigger data depends highly on the model's hyperparameters, in this experiment we perform a grid search over the number of memory hops (1, 2, 3) and the embedding dimensionality (32, 64, 128) for each train set size independently; everything else is fixed as in the previous experiment. The table only shows the best-performing hyperparameter configuration for each of the train set sizes. The results confirm hypothesis (ii) above, i.e.
that MemN2Ns are in principle able to learn to process the incremental dialogue phenomena in bAbI+, but that they require tens of thousands of training instances to do so: even with 100000 dialogues, the semantic accuracy on the original test set stands at 80.5%. These experiments shed significant light on the currently ambiguous robustness results reported in the dialogue systems literature today. Specifically, they show that, from the point of view of dialogue system developers in the real world, learning to process natural spontaneous dialogue using MemN2Ns in a purely end-to-end fashion may not be practical: in bAbI+, the disfluent incremental phenomena were mixed in at will, thus affording access to arbitrarily large training sets; furthermore, the test set was synthetically constructed to follow the same pattern as the train set; whereas real, natural, spontaneous dialogue data is not only very expensive to collect, but is bound to be more complex, with the closeness between train & test data very difficult to control. A potential solution to this 'small data' problem is the use of computational dialogue models (such as, e.g., Ginzburg, 2012; Larsson, 2002; Poesio and Rieser, 2010; Eshghi et al., 2015) with a studied empirical foundation as a form of bias or prior in subsequent learning, thus exploiting the linguistic knowledge inherent in such models. Even if they are not used directly, they can be used to inform the architecture of particular machine learning methods, especially deep learning architectures and techniques, with a view to more modularity in such architectures, with general language processing modules that are transferable from one domain to another, much like an NL grammar.
3 Testing an incremental, semantic grammar on bAbI & bAbI+ In this section, we first quickly introduce an incremental, semantic parser for dialogue processing, DyLan (Eshghi et al., 2011; Eshghi, 2015; Purver et al., 2011), based around the Dynamic Syntax and Type Theory with Records framework (Kempson et al., 2001; Cann et al., 2005; Eshghi et al., 2012; Cooper, 2005; Cooper, 2012), which has been used recently in combination with Reinforcement Learning for automatically inducing fully incremental dialogue systems from small amounts of raw, unannotated dialogue data (Eshghi and Lemon, 2014; Kalatzis et al., 2016), showing remarkable generalisation properties (Eshghi et al., 2017b; Eshghi et al., 2017a). We then go on to perform the same experiments on semantic accuracy as we did above with MemN2Ns, using this linguistically informed model instead. 3.1 DyLan: a parser for Dynamic Syntax DyLan (Eshghi et al., 2011; Eshghi, 2015) is the parser/implementation for Dynamic Syntax (DS), an action-based, word-by-word incremental, semantic grammar formalism (Kempson et al., 2001; Cann et al., 2005), especially suited to the highly fragmentary and contextual nature of dialogue. In DS, words are conditional actions, i.e. semantic updates; and dialogue is modelled as the interactive and incremental construction of contextual and semantic representations (Eshghi et al., 2015); see Fig. 2, which shows how semantic representations are constructed incrementally as Record Types of Type Theory with Records (TTR) (Cooper, 2005; Cooper, 2012). The contextual representations afforded by DS are of the fine-grained semantic content that is jointly negotiated/agreed upon by the interlocutors, as a result of processing questions and answers, clarification interaction, acceptances, self-/other-corrections, restarts, and other characteristic incremental phenomena in dialogue; see Fig.
3 for a sketch of how self-corrections and restarts are processed via a backtrack and search mechanism over the parse search graph. (Footnote 4: the name DyLan is derived from "Dynamics of Language".) (Figure 2: Incremental parsing with DyLan; TTR record types built word-by-word for "A: Today" -> "..Robin arrives" -> "B: from?" -> "A: Sweden", with fields such as event=arrive : es, p2=pres(event) : t, x=robin : e, x1=Sweden : e, and p3=from(event,x1) : t accumulating incrementally.) (Figure 3: Processing self-corrections & restarts with DyLan: "A: any preference? B: with italian yeah sorry with spanish cuisine".) The nodes in this graph are (partial) semantic trees, and the edges correspond to words uttered by particular speakers.
Context of a partial tree in DS is the path back to the root on this parse search graph (see Hough, 2015; Hough and Purver, 2014; Eshghi et al., 2015, for details of the model). The upshot of this is that using DS, one can not only track the semantic content of some current turn as it is being constructed (parsed or generated) word-by-word, but also the context of the conversation as a whole, with the latter also encoding the grounded/agreed content of the conversation (see Eshghi et al., 2015; Purver et al., 2010 for details). Crucially for Eshghi et al. (2017b)'s model, the inherent incrementality of DS, together with the word-level as well as cross-turn parsing constraints it provides, enables the word-by-word exploration of the space of grammatical dialogues, thus lending itself very well to Reinforcement Learning (Kalatzis et al., 2016; Eshghi et al., 2017a). 3.2 Parsing bAbI and bAbI+ dialogues with DS The Dynamic Syntax (DS) grammar is learnable from data (Eshghi et al., 2013a; Eshghi et al., 2013b). But since the lexicon was induced from a corpus of child-directed utterances in this prior work, there were some constructions as well as individual words that it did not include5. (Footnote 5: in the near future we will use the learning method in Eshghi et al. (2013a) to induce DS grammars from larger semantic corpora such as the Groningen Meaning Bank, leading to much more wide-coverage lexicons than the present one.) One of the authors therefore extended this induced grammar manually to cover the bAbI dataset, which, despite not being very diverse, contains a wide range of complex grammatical constructions, such as long sequences of prepositional phrases, adjuncts, short answers to yes/no and wh-questions, appositions of NPs, causative verbs etc., and all of this within and across dialogue turns/speakers. Using DyLan, we parsed all dialogues in the bAbI train and test sets, as well as in the bAbI+ corpus, word-by-word, including both user and system utterances, in context. The grammar parses 100% of the dialogues, i.e. it does not fail on any word in any of the dialogues. 3.3 Semantic Accuracy of DyLan Merely parsing all dialogues in the bAbI and bAbI+ datasets doesn't mean that the semantic representations compiled for the dialogues were in fact correct. To measure the semantic accuracy of the parser, we used, as before, the API call annotations at the end of the bAbI and bAbI+ Task 1 dialogues. This was done programmatically, by checking that the correct slot values, i.e. those in the API call annotations, were in fact present in the semantic representations produced by the parser for each dialogue (see Fig. 2 for example semantic representations). We further checked that no other, incorrect slot value was present in these representations. The results showed that the parser has 100% semantic accuracy on both bAbI and bAbI+. This result is not surprising, given that Dynamic Syntax is a general model of incremental language processing, covering phenomena such as self-corrections & restarts (see Hough, 2015, for details of the model)6. It is worth noting that even though new lexical entries would have to be added for each new dataset/domain, given the parts-of-speech of the words in any given dataset, this can mostly be done automatically. Moreover, this result further reinforces the point made by Eshghi et al. (2017a) about the generalisation power of the Dynamic Syntax grammar: the grammar automatically generalises to a combinatorially large number of dialogue variations with various phenomena such as self-corrections, hesitations, restarts, clarification interaction, continuations, question-answer pairs etc., without having actually observed these in any of the seed/training dialogues. 4 Conclusion and ongoing work Our main advance is in exploring incremental processing for wider coverage of more natural everyday dialogue (e.g. containing self-corrections).
Our experiments show that a state-of-the-art model for end-to-end goal-oriented dialogue, MemN2N, lacks the ability to generalise to such phenomena, and performs poorly when confronted with natural spontaneous dialogue data. Our experiments further show that although this particular model is in principle able to learn to process incremental dialogue phenomena, it requires an impractically large amount of data to do so. The results in this paper therefore shed significant light on the currently ambiguous robustness results reported for end-to-end systems. We also assessed the performance of the DyLan dialogue parser on bAbI and bAbI+, which showed 100% parsing and semantic accuracy, highlighting the generalisation power of models that are linguistically informed and theoretically grounded, as compared with pure machine learning methods that aim to learn to process dialogue bottom-up from textual data alone, without any linguistic bias. (Footnote 6: A helpful reviewer points out that the DyLan setup is a carefully tuned rule-based system, thus rendering these results trivial. But we note that the results here are not due to ad-hoc construction of rules/lexicons, but due to the generality of the grammar model and its attendant incremental, left-to-right properties. For example, the ability to process self-corrections, restarts, etc. "comes for free", without the need to add or posit new machinery.) These issues are explored further in (Eshghi et al., 2017a)."
+ }
+ ]
+ },
+ "edge_feat": {}
+ }
+}
\ No newline at end of file