{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:35:36.697226Z"
},
"title": "Embodied Multimodal Agents to Bridge the Understanding Gap",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Krishnaswamy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Colorado State University Fort Collins",
"location": {
"region": "CO",
"country": "USA"
}
},
"email": ""
},
{
"first": "Nada",
"middle": [],
"last": "Alalyani",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Colorado State University Fort Collins",
"location": {
"region": "CO",
"country": "USA"
}
},
"email": "n.alalyani@colostate.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we argue that embodied multimodal agents, i.e., avatars, can play an important role in moving natural language processing toward \"deep understanding.\" Fully-featured interactive agents model encounters between two \"people,\" but a language-only agent has little environmental and situational awareness. Multimodal agents bring new opportunities for interpreting visuals, locational information, gestures, etc., which are more axes along which to communicate. We propose that multimodal agents, by facilitating an embodied form of human-computer interaction, provide additional structure that can be used to train models that move NLP systems closer to genuine \"understanding\" of grounded language, and we discuss ongoing studies using existing systems.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we argue that embodied multimodal agents, i.e., avatars, can play an important role in moving natural language processing toward \"deep understanding.\" Fully-featured interactive agents model encounters between two \"people,\" but a language-only agent has little environmental and situational awareness. Multimodal agents bring new opportunities for interpreting visuals, locational information, gestures, etc., which are more axes along which to communicate. We propose that multimodal agents, by facilitating an embodied form of human-computer interaction, provide additional structure that can be used to train models that move NLP systems closer to genuine \"understanding\" of grounded language, and we discuss ongoing studies using existing systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "As of the 2020s, high-profile NLP successes are being driven by large (and ever-growing) deep neural language models. 1 These models perform impressively according to common metrics on a variety of tasks such as text generation, question answering, summarization, machine translation, and more.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, deep examinations of large language models show that they may exploit artifacts or heuristics in the data (McCoy et al., 2019; Niven and Kao, 2019). Similar observations have been made about computer vision systems (Zhu et al., 2016; Barbu et al., 2019). The black-box nature of much NLP has led to the argument that such models are insufficient at demonstrating understanding of communicative intent and cannot explain why or when these failures occur (Bender and Koller, 2020). 1 As of writing, the largest language model is the Switch Transformer by Google (Fedus et al., 2021), a trillion-parameter language model beating the previous record-holder, GPT-3 (Brown et al., 2020). The neurosymbolic approach to AI has long argued for structured representation (Garcez et al., 2015; Besold et al., 2017), and similar arguments have also been made by deep learning luminaries, e.g., Bengio (2017). Introducing preexisting structure into NLP facilitates higher-order reasoning, and a hybrid approach does so at larger scales than purely symbolic systems by using flexible deep-learning representations as input channels.",
"cite_spans": [
{
"start": 115,
"end": 135,
"text": "(McCoy et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 136,
"end": 156,
"text": "Niven and Kao, 2019)",
"ref_id": "BIBREF22"
},
{
"start": 225,
"end": 243,
"text": "(Zhu et al., 2016;",
"ref_id": "BIBREF30"
},
{
"start": 244,
"end": 263,
"text": "Barbu et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 488,
"end": 489,
"text": "1",
"ref_id": null
},
{
"start": 568,
"end": 588,
"text": "(Fedus et al., 2021)",
"ref_id": "BIBREF12"
},
{
"start": 657,
"end": 683,
"text": "GPT-3 (Brown et al., 2020)",
"ref_id": null
},
{
"start": 763,
"end": 784,
"text": "(Garcez et al., 2015;",
"ref_id": "BIBREF13"
},
{
"start": 785,
"end": 805,
"text": "Besold et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 884,
"end": 897,
"text": "Bengio (2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multimodal NLP systems, particularly those with embodied agents, model encounters between two \"people.\" A fully-featured interaction requires an encoding of structures that make language interpretable and \"deeply understandable.\" This makes embodied multimodal interaction a uniquely useful tool for examining what NLP models in use meaningfully learn and \"understand.\" If one mode of expression (e.g., gesture) is insufficiently communicative, another (e.g., language) can be used to examine where it went wrong. Each additional modality provides an avenue through which to validate models of other modalities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we review common components of embodied multimodal agent systems and their contributions to \"deep understanding.\" We present ongoing experiments in agents who exhibit grounded understanding, using complex multimodal referring expressions as an example task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As AI systems become more integrated with everyday life, users at times harbor misapprehensions that they behave like humans. This dates as far back as ELIZA (Weizenbaum, 1966), whose users were convinced that ELIZA \"understood\" them despite Weizenbaum's insistence otherwise. The well-known SHRDLU displayed some situational understanding, but SHRDLU's deterministic perception was not sensor-driven (Winograd, 1972). Quek et al. (2002) and Dumas et al. (2009), among many others, define multimodal interaction as interaction with multiple distinct input channels, e.g., spoken language and gestures, that complement and reinforce each other. This relates to \"communicative intent\" in that if one channel is ambiguous, the other(s) may clarify or add to the meaning. Multimodal interactions may increase understanding by making it easier to retrieve a communicative intent from a combination of inputs. This makes multimodality critical in computer systems that display understanding. Bolt's \"Put-that-there\" (1980) anticipated some critical issues, including the use of deixis for disambiguation. The subsequent community that evolved around multimodal integration (e.g., Kennington et al. (2013), Turk (2014)) gave rise to a number of embodied conversational agents (ECAs) (e.g., Cassell (2000)) deployed in different task domains, including education (Barmaki and Hughes, 2018), negotiations (Mell et al., 2018), and medical practitioner evaluation (Carnell et al., 2019).",
"cite_spans": [
{
"start": 158,
"end": 176,
"text": "(Weizenbaum, 1966)",
"ref_id": "BIBREF28"
},
{
"start": 402,
"end": 418,
"text": "(Winograd, 1972)",
"ref_id": "BIBREF29"
},
{
"start": 421,
"end": 439,
"text": "Quek et al. (2002)",
"ref_id": "BIBREF25"
},
{
"start": 444,
"end": 463,
"text": "Dumas et al. (2009)",
"ref_id": "BIBREF11"
},
{
"start": 1167,
"end": 1191,
"text": "Kennington et al. (2013)",
"ref_id": "BIBREF16"
},
{
"start": 1346,
"end": 1372,
"text": "(Barmaki and Hughes, 2018)",
"ref_id": "BIBREF1"
},
{
"start": 1388,
"end": 1407,
"text": "(Mell et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 1446,
"end": 1468,
"text": "(Carnell et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Conversational Systems",
"sec_num": "2"
},
{
"text": "Humans communicate contextually; they may gesture and refer to items in their co-perceived space. This co-perception and co-attention is \"central to determining the meanings of communicative acts [...] in shared events\" (Pustejovsky, 2018). A communicative act, C_a, can be modeled as a tuple of expressions from the modalities available to the agent, which convey complementary or redundant information. For example, if the modalities involved are Speech, Gesture, Facial expression, gaZe, and Action, then C_a = \u27e8S, G, F, Z, A\u27e9, of which any may be null or empty. Information in one channel may be supplemented by another channel, and they may disambiguate each other if properly aligned. For instance, if interpreted from the human's point of view, C_a = \u27e8S = \"left\", G = [Point_g \u2227 Dir = RIGHT]\u27e9 (cf. Kendon (2004), Lascarides and Stone (2009)) may signal a difference in the agents' relative frames of reference. This effect of embodiment is critical for deep understanding of situated language, vis-\u00e0-vis something as basic as directional terms.",
"cite_spans": [
{
"start": 220,
"end": 239,
"text": "(Pustejovsky, 2018)",
"ref_id": "BIBREF23"
},
{
"start": 807,
"end": 820,
"text": "Kendon (2004)",
"ref_id": "BIBREF15"
},
{
"start": 823,
"end": 850,
"text": "Lascarides and Stone (2009)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Conversational Systems",
"sec_num": "2"
},
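The communicative-act tuple and the "left"-versus-pointing-right example above can be sketched in code. This is a minimal illustration under stated assumptions, not the authors' implementation; the names `CommunicativeAct` and `frames_conflict` are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical encoding of C_a = <S, G, F, Z, A>; any field may be empty (None).
@dataclass
class CommunicativeAct:
    speech: Optional[str] = None   # S: spoken utterance
    gesture: Optional[str] = None  # G: e.g., "point:right"
    face: Optional[str] = None     # F: facial expression
    gaze: Optional[str] = None     # Z: gaze target
    action: Optional[str] = None   # A: physical action

def frames_conflict(act: CommunicativeAct) -> bool:
    """Flag a possible frame-of-reference mismatch: the speaker says
    'left' while pointing right (or vice versa), as in the paper's example."""
    if act.speech is None or act.gesture is None:
        return False
    spoken_dir = act.speech.strip().lower()
    pointed_dir = act.gesture.split(":")[-1].lower()
    return {spoken_dir, pointed_dir} == {"left", "right"}

# The example C_a = <S = "left", G = [Point ∧ Dir = RIGHT]>:
act = CommunicativeAct(speech="left", gesture="point:right")
assert frames_conflict(act)  # the agents' relative frames may differ
```

The point of the sketch is that aligned channels can cross-check each other: neither channel alone reveals the mismatch.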
{
"text": "An embodied agent can demonstrate aspects of their worldview, including how they ground and interpret utterances, gestures, or the consequences of actions, by acting themselves in their worlds. \"Embodied worlds\" may be virtual, physical, or mixed-reality. In the remainder of this paper we focus on virtual worlds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Conversational Systems",
"sec_num": "2"
},
{
"text": "To align the various modalities in the state space accessed by the agent's dialogue system, we assume a \"common ground\" structure associated with a dialogue state. This state monad (Unger, 2011; Bekki and Masuko, 2014), M\u03b1 = State \u2192 (\u03b1 \u00d7 State), corresponds to those computations that read and modify the state. M is a type constructor that consumes a state and returns a pair: \u27e8value, modified state\u27e9. A system operating under the aforementioned assumptions is \"Diana\" (McNeely-White et al., 2019). Diana's semantic knowledge of objects and actions is based on the VoxML modeling language (Pustejovsky and Krishnaswamy, 2016), and she asynchronously interprets spoken language and gesture to collaborate on construction tasks with humans. The human instructs Diana by deictically or linguistically referencing objects and indicating multimodally what to do with them. Diana's responses and actions make clear whether she understands what the user intended: a clear extraction of communicative intent from utterance, and a measure of \"deep understanding.\"",
"cite_spans": [
{
"start": 177,
"end": 190,
"text": "(Unger, 2011;",
"ref_id": "BIBREF27"
},
{
"start": 191,
"end": 214,
"text": "Bekki and Masuko, 2014)",
"ref_id": "BIBREF2"
},
{
"start": 604,
"end": 623,
"text": "Krishnaswamy, 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Conversational Systems",
"sec_num": "2"
},
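The state monad M\u03b1 = State \u2192 (\u03b1 \u00d7 State) described above can be sketched as ordinary functions that thread a dialogue state. A minimal sketch, assuming a dict-valued state; `unit`, `bind`, and the focus helpers are illustrative names, not the cited systems' API.

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
State = dict  # the dialogue "common ground", e.g. {"focus": ...}
# M a = State -> (a, State): a computation that reads and may modify the state.
M = Callable[[State], Tuple[A, State]]

def unit(value):
    """Inject a value without touching the state."""
    return lambda s: (value, s)

def bind(m, f):
    """Thread the state through two computations in sequence."""
    def run(s):
        value, s1 = m(s)
        return f(value)(s1)
    return run

def set_focus(obj):
    """A state-modifying computation: record the object in focus."""
    return lambda s: (obj, {**s, "focus": obj})

def get_focus():
    """A state-reading computation."""
    return lambda s: (s.get("focus"), s)

value, state = bind(set_focus("green block"), lambda _: get_focus())({})
assert value == "green block" and state["focus"] == "green block"
```

Each computation consumes a state and returns a \u27e8value, modified state\u27e9 pair, matching the type signature in the text.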
{
"text": "However, Diana's capabilities are not fully symmetrical. The human may speak verbosely, but Diana's responses are brief: \"OK,\" or the occasional disambiguatory question. To increase Diana's capacity for deep understanding, and symmetric retrieval of intent from utterance on the part of both Diana and the human, we are conducting experiments to fully develop the capabilities afforded by Diana's multimodal embodiment. These are outlined subsequently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Conversational Systems",
"sec_num": "2"
},
{
"text": "Referring expressions (REs) provide a defined and evaluable case study in the ability of an NLP system to extract communicative intent from an utterance. Either the agent correctly retrieves the object the human referenced or it does not; either the human can correct misunderstanding or they cannot.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ongoing Experiments",
"sec_num": "3"
},
{
"text": "Referring expressions exploit information about both object characteristics and locations. Linguistic referencing strategies can use high-level abstractions to distinguish an object in a given location from similar ones elsewhere, yet the location described may still be difficult to ground or interpret. When interacting person-to-person, humans deploy a variety of strategies to clearly indicate objects and locations in a discourse, including mixing and matching modalities on the fly and switching strategies to optimize both economy and clarity. Therefore, in an ongoing study to replicate this same capacity in interactive agents, we are exploring how to generate symmetrically-descriptive references to distinct objects in context. The EMRE (Embodied Multimodal Referring Expressions) dataset contains 1,500 videos of a version of the Diana agent referencing multiple objects in sequence with a mix of language and gestures. Its authors found that human evaluators judged multimodal referring strategies most natural and preferred more descriptive language. However, they also found that the data gathered was not enough to train an effective generation model, so we build on this strategy to augment the available data to a sufficient level. We are deploying a web-based version of the Diana system, where participants are presented with a scene populated with randomly-placed objects, including objects that are identical in nominal attributes like color, as in the EMRE dataset. The human participant will be prompted analogously to what is shown in Fig. 1. The experimental system indicates one object of focus and Diana prompts the user to generate a referring expression with a question, e.g., \"Which object should I pick up?\" We then log information about the scene, which acts as a set of contextual parameters, including the target object, its distance from the agent, object coordinates, relations in the scene, and previously referred-to objects; and about how the user responds, including the complete utterance, the modality(ies) used, attribute (identity) based descriptions, demonstratives used, and relational descriptions used relative to other objects, including those referred to at previous points in the dialogue.",
"cite_spans": [],
"ref_spans": [
{
"start": 1574,
"end": 1580,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Ongoing Experiments",
"sec_num": "3"
},
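The logged scene parameters and response features described above can be pictured as a single record per interaction. A hypothetical sketch; every field name and value here is an illustrative assumption, not the authors' logging schema.

```python
# One hypothetical logged sample from the proposed crowd-sourced study.
record = {
    "scene": {  # contextual parameters
        "target_object": "block7",
        "distance_from_agent": 1.3,               # assumed units
        "object_coordinates": (0.4, 0.0, 1.2),
        "relations": [("left", "block7", "block2")],
        "previously_referred": ["block2"],
    },
    "response": {  # how the participant responded
        "utterance": "the green block next to the one you just moved",
        "modalities": ["speech", "mouse_deixis"],
        "attributes_used": ["green"],
        "demonstratives_used": [],
        "relational_descriptions": [("next_to", "block7", "block2")],
    },
}

# A generation model would consume the "scene" half as input features
# and treat the "response" half as the training target.
assert set(record) == {"scene", "response"}
```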
{
"text": "Some of these parameters are identical to those in the EMRE dataset; others are included because we hypothesize that they will be useful to our proposed generation models ( \u00a73.1.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Referring Expression Generation",
"sec_num": "3.1"
},
{
"text": "The multimodal inputs our system captures from the user are speech, transcribed via automated speech recognition, and mouse-based deictic gesture as a proxy for live pointing (see more details in \u00a73.1.1). The modes of presentation available to Diana are: speech (generated via text-to-speech), pointing (via animated gesture), and visualized action (acting directly on virtual objects in the scene).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Referring Expression Generation",
"sec_num": "3.1"
},
{
"text": "Participants in this study will be given explicit instructions as to the nature of the task, including that they should speak to Diana and can use the mouse to point to objects, to construct the conceit that Diana is a peer, such that the participants treat her like an appropriately collaborative agent. Depending on the crowdsourcing platform, we will also be able to filter the participants we solicit to make sure they have access to appropriate equipment, like a high-quality computer microphone. Fig. 2 contains an example dialogue, which shows how Diana extracts distinct objects from multimodal referring expressions and how, when those objects do not match the human's intent, the human is able to correct her, also multimodally. We expect to deploy this study on a crowdsourcing platform like Prolific or Amazon Mechanical Turk, and aim to solicit interactions from approximately 250 workers using multimodal referring expressions while interacting with Diana in the task described above. Each worker will view 10 scenes with different configurations in which to refer to up to 10 distinct target objects, resulting in a total of 25,000 samples. These multimodal referring expressions are expected to fall into three categories: attributive REs, relational REs, and historical REs, as we describe in \u00a73.1.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 502,
"end": 508,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Multimodal Referring Expression Generation",
"sec_num": "3.1"
},
{
"text": "We anticipate taking 3 months to build the webbased version of Diana; 1 month to gather raw data from workers; 1 month to annotate the data; 2 months to perform statistical analysis on the data and investigate good model features; and 3-4 months to design, build, and train our proposed RE generation models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Referring Expression Generation",
"sec_num": "3.1"
},
{
"text": "For the avatar to respond appropriately, inputs need to be as clean as possible. In our multimodal use case, this includes the speech input, the pointing input, and the parse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input Quality Assurance",
"sec_num": "3.1.1"
},
{
"text": "We solicited voice recordings of 30 university graduate students reading scripts containing domain-specific vocabulary. These cover diverse vocal profiles, including accent and timbre, so speech models suitable for these voices should be suitable for a crowd-sourced study. We are using these to assess the quality of speech recognition (e.g., Google Cloud ASR) in the task domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input Quality Assurance",
"sec_num": "3.1.1"
},
{
"text": "Our web-based Diana uses the mouse as a proxy for deixis, instead of CNN-based gesture recognition. We want to avoid participants \"gaming the system\" by using accurate mouse deixis alone, so we build some variance into this deixis proxy. The purple circle shown in Fig. 1 fluctuates in size proportionally to the variance in pointing location of the aforementioned gesture recognizer. The location of mouse \"deixis\" is imprecise by design, forcing participants to also use language to properly describe the target object.",
"cite_spans": [],
"ref_spans": [
{
"start": 264,
"end": 270,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Input Quality Assurance",
"sec_num": "3.1.1"
},
{
"text": "Parsers under investigation are Stanford CoreNLP and Google's SyntaxNet, which are being evaluated for the syntactic dependencies they provide and their ability to extract entities and the constituent relationships between them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input Quality Assurance",
"sec_num": "3.1.1"
},
{
"text": "When an embodied interactive agent is a partner in a dialogue, this provides a layer of exposure not present in end-to-end systems. The agent's responses can be used to probe where in the pipeline errors occur, be it in the speech recognition, the parse, or the interpretation. As discussed in \u00a72, the existence of multiple modal channels allows investigation into which channel, e.g., the language, the gesture, the gaze/attention, etc. is the likely cause.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Generation Models",
"sec_num": "3.1.2"
},
{
"text": "Assuming maximally correct inputs, as discussed in \u00a73.1, existing NLP technologies, combined with an embodied multimodal agent, creates a mechanism to examine how multiple modalities can combine to signal aspects of an input that the agent can understand, or extract intent from, and reproduce, or communicate its own intent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Generation Models",
"sec_num": "3.1.2"
},
{
"text": "The EMRE dataset contains referring expressions generated via a mix of stochastic sampling and slot filling. From our crowd-sourced study, we plan to extract three particular kinds of additional data to train our multimodal referring expression generation (MREG) models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Generation Models",
"sec_num": "3.1.2"
},
{
"text": "\u2022 Attributes used to denote objects (A-MRE);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Generation Models",
"sec_num": "3.1.2"
},
{
"text": "\u2022 Relations to other objects used to describe the target object (R-MRE);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Generation Models",
"sec_num": "3.1.2"
},
{
"text": "\u2022 Dialogue history invoked when describing objects (H-MRE), which involves modeling when previous introductions into the common ground lose relevancy. Attributes, relations, and actions in the history are encoded in VoxML, and serve as the symbolic inputs to our neurosymbolic architectures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Generation Models",
"sec_num": "3.1.2"
},
{
"text": "We intend our approaches to generate multimodal referring expressions in context that can blend modalities at runtime, for instance using demonstratives with gesture and distance of the target object relative to the agent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Generation Models",
"sec_num": "3.1.2"
},
{
"text": "Our proposed MREG models are centered around LSTMs (Hochreiter and Schmidhuber, 1997) for their ability to capture sequential information as occurs in a dialogue. MREG model outputs are likely to take the form \u27e8Modality, Utterance, Location, Demonstrative\u27e9, where M \u2208 [Gesture, Language, Ensemble], U is a decoded sentence embedding, L is the location the gesture grounds to, and D \u2208 [this, that]. Depending on the value of M, some of the other parameters may be empty by default. Fig. 3 shows schematics of the MREG architectures we are currently exploring.",
"cite_spans": [
{
"start": 51,
"end": 85,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 484,
"end": 490,
"text": "Fig. 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Proposed Generation Models",
"sec_num": "3.1.2"
},
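The output tuple \u27e8Modality, Utterance, Location, Demonstrative\u27e9, with fields empty depending on the modality, can be sketched as a small data structure. This is an illustrative sketch, not the proposed models' actual output format; the class name and validity rules are assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MREGOutput:
    """Hypothetical MREG output <M, U, L, D>; unused fields default to None."""
    modality: str                       # M: "Gesture", "Language", or "Ensemble"
    utterance: Optional[str] = None     # U: decoded sentence (None for pure gesture)
    location: Optional[Tuple[float, float, float]] = None  # L: gesture ground point
    demonstrative: Optional[str] = None  # D: "this" or "that"

    def is_valid(self) -> bool:
        # Assumed constraints: each modality requires its own channels.
        if self.modality == "Gesture":
            return self.location is not None
        if self.modality == "Language":
            return self.utterance is not None
        if self.modality == "Ensemble":
            return self.utterance is not None and self.location is not None
        return False

assert MREGOutput("Gesture", location=(0.4, 0.0, 1.2)).is_valid()
assert MREGOutput("Ensemble", utterance="that green block",
                  location=(0.4, 0.0, 1.2), demonstrative="that").is_valid()
assert not MREGOutput("Language").is_valid()
```

A runtime blender could then fill the demonstrative ("this" vs. "that") from the target's distance to the agent, as the text suggests.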
{
"text": "A-MRE Object attributes are predicted by training an LSTM (A-LSTM) over the attributive terms participants use to describe objects (e.g., constituents tagged as nmod in a parse), tracking how they vary terms for the same object over time. R-MRE The target object is described relative to objects around it. E.g., in Fig. 3, if the target is \"green block,\" then the proximate objects (orange and red blocks) are used to create binary relation pairs with the target (e.g., left(O, G) \u2194 right(G, O)). These inputs are then fed to the R-LSTM, trained over the R-MRE data, with a query consisting of the target, which outputs relational descriptors of the target.",
"cite_spans": [],
"ref_spans": [
{
"start": 279,
"end": 285,
"text": "Fig. 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Proposed Generation Models",
"sec_num": "3.1.2"
},
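The binary relation pairs fed to the R-LSTM, e.g., left(O, G) \u2194 right(G, O), can be sketched as a relation plus its inverse with the arguments swapped. A minimal sketch; the inverse table and helper name are assumptions for illustration.

```python
# Hypothetical inverse table for spatial relations between scene objects.
INVERSE = {"left": "right", "right": "left",
           "in_front_of": "behind", "behind": "in_front_of"}

def relation_pairs(relation: str, a: str, b: str):
    """Return a binary relation and its inverse with arguments swapped,
    e.g., left(O, G) <-> right(G, O)."""
    return [(relation, a, b), (INVERSE[relation], b, a)]

pairs = relation_pairs("left", "orange block", "green block")
assert pairs == [("left", "orange block", "green block"),
                 ("right", "green block", "orange block")]
```

Pairs like these, built from the objects proximate to the target, would form the symbolic inputs queried with the target object.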
{
"text": "H-MRE Instead of features of a single configuration, the H-LSTM is trained over the H-MRE data, or sequences of configurations and the moves the agent makes that led to each of them. The job of the H-LSTM is to encode the features of this list and return the decoded multimodal RE expressions compatible with the given list of configurations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Generation Models",
"sec_num": "3.1.2"
},
{
"text": "Clark (1996) casts language as a joint activity. We propose that introducing the interactive element into natural language technology enables a number of opportunities to build systems that go beyond processing language to understanding it. We focus our experiments on embodied systems because they create a conceit of interacting with another \"person.\" If these systems can interact in a way that suggests understanding of their interlocutor's intents, they will have demonstrated a step toward true computational natural language understanding. We are pursuing experiments in multimodal referring expression generation as an illustrative use case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Objectnet: A largescale bias-controlled dataset for pushing the limits of object recognition models",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Barbu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mayo",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Alverio",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Gutfreund",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Tenenbaum",
"suffix": ""
},
{
"first": "Boris",
"middle": [],
"last": "Katz",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "9448--9458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenen- baum, and Boris Katz. 2019. Objectnet: A large- scale bias-controlled dataset for pushing the limits of object recognition models. In Advances in Neural Information Processing Systems, pages 9448-9458.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Embodiment analytics of practicing teachers in a virtual immersive environment",
"authors": [
{
"first": "Roghayeh",
"middle": [],
"last": "Barmaki",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"E"
],
"last": "Hughes",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Computer Assisted Learning",
"volume": "34",
"issue": "4",
"pages": "387--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roghayeh Barmaki and Charles E Hughes. 2018. Em- bodiment analytics of practicing teachers in a virtual immersive environment. Journal of Computer As- sisted Learning, 34(4):387-396.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Meta-lambda calculus and linguistic monads",
"authors": [
{
"first": "Daisuke",
"middle": [],
"last": "Bekki",
"suffix": ""
},
{
"first": "Moe",
"middle": [],
"last": "Masuko",
"suffix": ""
}
],
"year": 2014,
"venue": "Formal Approaches to Semantics and Pragmatics",
"volume": "",
"issue": "",
"pages": "31--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daisuke Bekki and Moe Masuko. 2014. Meta-lambda calculus and linguistic monads. In Formal Ap- proaches to Semantics and Pragmatics, pages 31-64. Springer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Climbing towards NLU: On meaning, form, and understanding in the age of data",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily M Bender and Alexander Koller. 2020. Climb- ing towards nlu: On meaning, form, and understand- ing in the age of data. In Proc. of ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Neuralsymbolic learning and reasoning: A survey and interpretation",
"authors": [
{
"first": "Tarek",
"middle": [
"R"
],
"last": "Besold",
"suffix": ""
},
{
"first": "Artur",
"middle": [],
"last": "d'Avila Garcez",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Bader",
"suffix": ""
},
{
"first": "Howard",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Domingos",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Hitzler",
"suffix": ""
},
{
"first": "Kai-Uwe",
"middle": [],
"last": "K\u00fchnberger",
"suffix": ""
},
{
"first": "Luis",
"middle": [
"C"
],
"last": "Lamb",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Lowd",
"suffix": ""
},
{
"first": "Priscila",
"middle": [
"Machado",
"Vieira"
],
"last": "Lima",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.03902"
]
},
"num": null,
"urls": [],
"raw_text": "Tarek R Besold, Artur d'Avila Garcez, Sebastian Bader, Howard Bowman, Pedro Domingos, Pascal Hitzler, Kai-Uwe K\u00fchnberger, Luis C Lamb, Daniel Lowd, Priscila Machado Vieira Lima, et al. 2017. Neural- symbolic learning and reasoning: A survey and in- terpretation. arXiv preprint arXiv:1711.03902.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Put-that-there",
"authors": [
{
"first": "Richard",
"middle": [
"A"
],
"last": "Bolt",
"suffix": ""
}
],
"year": 1980,
"venue": "Voice and gesture at the graphics interface",
"volume": "14",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard A Bolt. 1980. \"Put-that-there\": Voice and gesture at the graphics interface, volume 14. ACM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Language models are few-shot learners",
"authors": [
{
"first": "Tom",
"middle": [
"B"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Ryder",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Subbiah",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Dhariwal",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Shyam",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Askell",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.14165"
]
},
"num": null,
"urls": [],
"raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Predicting student success in communication skills learning scenarios with virtual humans",
"authors": [
{
"first": "Stephanie",
"middle": [],
"last": "Carnell",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Lok",
"suffix": ""
},
{
"first": "Melva",
"middle": [
"T"
],
"last": "James",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"K"
],
"last": "Su",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 9th International Conference on Learning Analytics & Knowledge",
"volume": "",
"issue": "",
"pages": "436--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephanie Carnell, Benjamin Lok, Melva T James, and Jonathan K Su. 2019. Predicting student success in communication skills learning scenarios with virtual humans. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge, pages 436-440.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Embodied conversational agents",
"authors": [
{
"first": "Justine",
"middle": [],
"last": "Cassell",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justine Cassell. 2000. Embodied conversational agents. MIT press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Using language",
"authors": [
{
"first": "Herbert",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herbert H Clark. 1996. Using language. Cambridge university press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multimodal interfaces: A survey of principles, models and frameworks. Human machine interaction",
"authors": [
{
"first": "Bruno",
"middle": [],
"last": "Dumas",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Lalanne",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Oviatt",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "3--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruno Dumas, Denis Lalanne, and Sharon Oviatt. 2009. Multimodal interfaces: A survey of principles, models and frameworks. Human machine interaction, pages 3-26.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity",
"authors": [
{
"first": "William",
"middle": [],
"last": "Fedus",
"suffix": ""
},
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Neural-symbolic learning and reasoning: contributions and challenges",
"authors": [
{
"first": "Artur",
"middle": [],
"last": "Garcez",
"suffix": ""
},
{
"first": "Tarek",
"middle": [
"R"
],
"last": "Besold",
"suffix": ""
},
{
"first": "L",
"middle": [
"d"
],
"last": "Raedt",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "F\u00f6ldiak",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Hitzler",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Icard",
"suffix": ""
},
{
"first": "Kai-Uwe",
"middle": [],
"last": "K\u00fchnberger",
"suffix": ""
},
{
"first": "Luis",
"middle": [
"C"
],
"last": "Lamb",
"suffix": ""
},
{
"first": "Risto",
"middle": [],
"last": "Miikkulainen",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"L"
],
"last": "Silver",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Artur Garcez, Tarek R Besold, L d Raedt, Peter F\u00f6ldiak, Pascal Hitzler, Thomas Icard, Kai-Uwe K\u00fchnberger, Luis C Lamb, Risto Miikkulainen, and Daniel L Silver. 2015. Neural-symbolic learning and reasoning: contributions and challenges.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Gesture: Visible action as utterance",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Kendon",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Kendon. 2004. Gesture: Visible action as utterance. Cambridge University Press.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Interpreting situated dialogue utterances: an update model that uses speech, gaze, and gesture information",
"authors": [
{
"first": "Casey",
"middle": [],
"last": "Kennington",
"suffix": ""
},
{
"first": "Spyridon",
"middle": [],
"last": "Kousidis",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Schlangen",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of SIGdial",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Casey Kennington, Spyridon Kousidis, and David Schlangen. 2013. Interpreting situated dialogue utterances: an update model that uses speech, gaze, and gesture information. Proceedings of SIGdial 2013.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Generating a novel dataset of multimodal referring expressions",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Krishnaswamy",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Conference on Computational Semantics-Short Papers",
"volume": "",
"issue": "",
"pages": "44--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Krishnaswamy and James Pustejovsky. 2019. Generating a novel dataset of multimodal referring expressions. In Proceedings of the 13th International Conference on Computational Semantics-Short Papers, pages 44-51.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A formal semantic analysis of gesture",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Stone",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Lascarides and Matthew Stone. 2009. A formal semantic analysis of gesture. Journal of Semantics, page ffp004.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "McCoy",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3428--3448",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1334"
]
},
"num": null,
"urls": [],
"raw_text": "Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Useraware shared perception for embodied agents",
"authors": [
{
"first": "David",
"middle": [
"G"
],
"last": "McNeely-White",
"suffix": ""
},
{
"first": "Francisco",
"middle": [
"R"
],
"last": "Ortega",
"suffix": ""
},
{
"first": "J",
"middle": [
"Ross"
],
"last": "Beveridge",
"suffix": ""
},
{
"first": "Bruce",
"middle": [
"A"
],
"last": "Draper",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Bangar",
"suffix": ""
},
{
"first": "Dhruva",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Krishnaswamy",
"suffix": ""
},
{
"first": "Kyeongmin",
"middle": [],
"last": "Rim",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Ruiz",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE International Conference on Humanized Computing and Communication (HCC)",
"volume": "",
"issue": "",
"pages": "46--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David G McNeely-White, Francisco R Ortega, J Ross Beveridge, Bruce A Draper, Rahul Bangar, Dhruva Patil, James Pustejovsky, Nikhil Krishnaswamy, Kyeongmin Rim, Jaime Ruiz, et al. 2019. User-aware shared perception for embodied agents. In 2019 IEEE International Conference on Humanized Computing and Communication (HCC), pages 46-51. IEEE.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Results of the first annual human-agent league of the automated negotiating agents competition",
"authors": [
{
"first": "Johnathan",
"middle": [],
"last": "Mell",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Gratch",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Baarslag",
"suffix": ""
},
{
"first": "Reyhan",
"middle": [],
"last": "Aydogan",
"suffix": ""
},
{
"first": "Catholijn M",
"middle": [],
"last": "Jonker",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 18th International Conference on Intelligent Virtual Agents",
"volume": "",
"issue": "",
"pages": "23--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johnathan Mell, Jonathan Gratch, Tim Baarslag, Reyhan Aydogan, and Catholijn M Jonker. 2018. Results of the first annual human-agent league of the automated negotiating agents competition. In Proceedings of the 18th International Conference on Intelligent Virtual Agents, pages 23-28.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Probing neural network comprehension of natural language arguments",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Niven",
"suffix": ""
},
{
"first": "Hung-Yu",
"middle": [],
"last": "Kao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4658--4664",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1459"
]
},
"num": null,
"urls": [],
"raw_text": "Timothy Niven and Hung-Yu Kao. 2019. Probing neural network comprehension of natural language arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4658-4664, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "From actions to events",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2018,
"venue": "Interaction Studies",
"volume": "19",
"issue": "",
"pages": "289--317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky. 2018. From actions to events. Interaction Studies, 19(1-2):289-317.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "VoxML: A visualization modeling language",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Krishnaswamy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky and Nikhil Krishnaswamy. 2016. VoxML: A visualization modeling language. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Multimodal human discourse: gesture and speech",
"authors": [
{
"first": "Francis",
"middle": [],
"last": "Quek",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcneill",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Bryll",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Duncan",
"suffix": ""
},
{
"first": "Xin-Feng",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Cemil",
"middle": [],
"last": "Kirbas",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"E"
],
"last": "McCullough",
"suffix": ""
},
{
"first": "Rashid",
"middle": [],
"last": "Ansari",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Transactions on Computer-Human Interaction (TOCHI)",
"volume": "9",
"issue": "3",
"pages": "171--193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francis Quek, David McNeill, Robert Bryll, Susan Duncan, Xin-Feng Ma, Cemil Kirbas, Karl E McCullough, and Rashid Ansari. 2002. Multimodal human discourse: gesture and speech. ACM Transactions on Computer-Human Interaction (TOCHI), 9(3):171-193.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Multimodal interaction: A review",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Turk",
"suffix": ""
}
],
"year": 2014,
"venue": "Pattern Recognition Letters",
"volume": "36",
"issue": "",
"pages": "189--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Turk. 2014. Multimodal interaction: A re- view. Pattern Recognition Letters, 36:189-195.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Dynamic semantics as monadic computation",
"authors": [
{
"first": "Christina",
"middle": [],
"last": "Unger",
"suffix": ""
}
],
"year": 2011,
"venue": "JSAI international symposium on artificial intelligence",
"volume": "",
"issue": "",
"pages": "68--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christina Unger. 2011. Dynamic semantics as monadic computation. In JSAI international symposium on artificial intelligence, pages 68-81. Springer.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Eliza-a computer program for the study of natural language communication between man and machine",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Weizenbaum",
"suffix": ""
}
],
"year": 1966,
"venue": "Communications of the ACM",
"volume": "9",
"issue": "1",
"pages": "36--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Weizenbaum. 1966. Eliza-a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36-45.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Shrdlu: A system for dialog",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Winograd",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Winograd. 1972. Shrdlu: A system for dialog.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Object recognition with and without objects",
"authors": [
{
"first": "Zhuotun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Lingxi",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"L"
],
"last": "Yuille",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.06596"
]
},
"num": null,
"urls": [],
"raw_text": "Zhuotun Zhu, Lingxi Xie, and Alan L Yuille. 2016. Object recognition with and without objects. arXiv preprint arXiv:1611.06596.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "The pink circle indicates the target object. Diana asks \"Which object should I pick up?\" and waits for her partner to reference an object multimodally. The purple circle shows where the user is pointing.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Sample dialogue.",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "The A-LSTM takes a query representing the target object and outputs a descriptor tuple M, U, L, D. R-MRE Relations are predicted with another LSTM (R-LSTM) that takes as input C, pairwise permutations of the target object's configuration",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "MREG architectures under exploration",
"num": null
}
}
}
}