{
"paper_id": "C02-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:20:52.203467Z"
},
"title": "Semantics-based Representation for Multimodal Interpretation in Conversational Systems",
"authors": [
{
"first": "Joyce",
"middle": [],
"last": "Chai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "",
"location": {}
},
"email": "jchai@us.ibm.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "To support context-based multimodal interpretation in conversational systems, we have developed a semantics-based representation to capture salient information from user inputs and the overall conversation. In particular, we present three unique characteristics: fine-grained semantic models, flexible composition of feature structures, and consistent representation at multiple levels. This representation allows our system to use rich contexts to resolve ambiguities, infer unspecified information, and improve multimodal alignment. As a result, our system is able to enhance understanding of multimodal inputs, including abbreviated, imprecise, or complex ones.",
"pdf_parse": {
"paper_id": "C02-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "To support context-based multimodal interpretation in conversational systems, we have developed a semantics-based representation to capture salient information from user inputs and the overall conversation. In particular, we present three unique characteristics: fine-grained semantic models, flexible composition of feature structures, and consistent representation at multiple levels. This representation allows our system to use rich contexts to resolve ambiguities, infer unspecified information, and improve multimodal alignment. As a result, our system is able to enhance understanding of multimodal inputs, including abbreviated, imprecise, or complex ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Inspired by earlier works on multimodal interfaces (e.g., Bolt, 1980; Cohen et al., 1996; Wahlster, 1991; Zancanaro et al., 1997), we are currently building an intelligent infrastructure, called Responsive Information Architect (RIA), to aid users in their information-seeking process. Specifically, RIA engages users in a full-fledged multimodal conversation, where users can interact with RIA through multiple modalities (speech, text, and gesture), and RIA can act/react through automated multimedia generation (speech and graphics) (Zhou and Pan 2001). Currently, RIA is embodied in a testbed, called Real Hunter™, a real-estate application to help users find residential properties.",
"cite_spans": [
{
"start": 58,
"end": 69,
"text": "Bolt, 1980;",
"ref_id": "BIBREF0"
},
{
"start": 70,
"end": 89,
"text": "Cohen et al., 1996;",
"ref_id": null
},
{
"start": 90,
"end": 105,
"text": "Wahlster, 1991;",
"ref_id": null
},
{
"start": 106,
"end": 129,
"text": "Zancanaro et al., 1997)",
"ref_id": "BIBREF10"
},
{
"start": 536,
"end": 555,
"text": "(Zhou and Pan 2001)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a part of this effort, we are building a semantics-based multimodal interpretation framework MIND (Multimodal Interpretation for Natural Dialog) to identify the meanings of user multimodal inputs. Traditional multimodal interpretation has focused on integrating multimodal inputs together, with limited consideration of the interaction context. In a conversation setting, user inputs could be abbreviated or imprecise, and combining multiple inputs alone often cannot yield a full understanding. Therefore, MIND applies rich contexts (e.g., conversation context and domain context) to enhance multimodal interpretation. In support of this context-based approach, we have designed a semantics-based representation to capture salient information from user inputs and the overall conversation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we will first give a brief overview of multimodal interpretation in MIND. Then we will present our semantics-based representation and discuss its characteristics. Finally, we will describe the use of this representation in context-based multimodal interpretation and demonstrate that, with this representation, MIND is able to process a variety of user inputs, including ambiguous, abbreviated, and complex ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To interpret user multimodal inputs, MIND applies three major processes as in Figure 1: unimodal understanding, multimodal understanding, and discourse understanding. During unimodal understanding, MIND applies modality-specific recognition and understanding components (e.g., a speech recognizer and a language interpreter) to identify meanings from each unimodal input, and captures those meanings in a representation called a modality unit. During multimodal understanding, MIND combines the semantic meanings of unimodal inputs (i.e., modality units), and uses contexts (e.g., conversation context and domain context) to form an overall understanding of user multimodal inputs. Such an overall understanding is then captured in a representation called a conversation unit. Furthermore, MIND also identifies how an input relates to the overall conversation discourse through discourse understanding. In particular, MIND uses a representation called a conversation segment to group together inputs that contribute to the same goal or sub-goal (Grosz and Sidner, 1986). The result of discourse understanding is an evolving conversation history that reflects the overall progress of a conversation. Figure 2 shows a conversation fragment between a user and MIND. In the first user input U1, the deictic gesture (Figure 3) is ambiguous: it is not clear which object the user is pointing at, the two houses nearby or the town of Irvington 1 . The third user input U3 by itself is incomplete since the purpose of the input is not specified. Furthermore, in U4, a single deictic gesture overlaps (in terms of time) with both \"this style\" and \"here\" from the speech input, so it is hard to determine which of those two references should be aligned and fused with the gesture. Finally, U5 is also complex since multiple objects (\"these two houses\") specified in the speech input need to be unified with a single deictic gesture.",
"cite_spans": [
{
"start": 1032,
"end": 1056,
"text": "(Grosz and Sidner, 1986)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 76,
"end": 84,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1187,
"end": 1195,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1291,
"end": 1299,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multimodal Interpretation",
"sec_num": "2"
},
{
"text": "This example shows that user multimodal inputs exhibit a wide range of variation: they could be abbreviated, ambiguous, or complex. Fusing inputs together alone often cannot yield a full understanding; to process such inputs, contexts are important.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Interpretation",
"sec_num": "2"
},
{
"text": "To support context-based multimodal interpretation, both representation of user inputs and representation of contexts are crucial. Currently, MIND uses three types of contexts: domain context, conversation context, and visual context. The domain context provides domain knowledge. The conversation context reflects the progress of the overall conversation. The visual context gives the detailed semantic and syntactic structures of visual objects and their relations. In this paper, we focus on representing user inputs and the conversation context. In particular, we discuss two aspects of representation: semantic models that capture salient information and structures that represent those semantic models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics-based Representation",
"sec_num": "3"
},
{
"text": "When two people participate in a conversation, their understanding of each other's purposes forms strong constraints on how the conversation is going to proceed. In particular, in a conversation centered on information seeking, understanding each other's information needs is crucial. Information needs can be characterized by two main aspects: the motivation for seeking the information of interest, and the information sought itself. Thus, MIND uses an intention model to capture the first aspect and an attention model to capture the second. Furthermore, since users can specify their information of interest in different ways, MIND also uses a constraint model to capture different types of constraints that are important for information seeking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Models",
"sec_num": "3.1"
},
{
"text": "Intention describes the purpose of a message. In an information seeking environment, intention indicates the motivation or task related to the information of interest. An intention is modeled by three dimensions: Motivator, indicating one of three high-level purposes: DataPresentation, DataAnalysis (e.g., comparison), and ExceptionHandling (e.g., clarification); Act, specifying whether the input is a request or a reply; and Method, indicating a specific task, e.g., Search (activating the relevant objects based on some criteria) or Lookup (evaluating/retrieving attributes of objects).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intention and Attention",
"sec_num": "3.1.1"
},
{
"text": "Attention relates to the objects and relations that are salient at each point of a conversation. In an information seeking environment, it relates to the information sought. An attention model is characterized by six dimensions. Base indicates the semantic type of the information of interest (e.g., House, School, or City, which are defined in our domain ontology). Topic specifies the granularity of the information of interest (e.g., Instance or Collection). Focus identifies the scope of the topic: whether it is about a particular feature (i.e., SpecficAspect) or about all main features (i.e., MainAspect). Aspect provides specific features of the topic. Constraint describes constraints to be satisfied (described later). Content points to the actual data. The intention and attention models were derived from preliminary studies of user information needs in seeking residential properties. The details are described in (Chai et al., 2002).",
"cite_spans": [
{
"start": 548,
"end": 562,
"text": "SpecficAspect)",
"ref_id": null
},
{
"start": 932,
"end": 951,
"text": "(Chai et al., 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intention and Attention",
"sec_num": "3.1.1"
},
{
"text": "For example, Figure 4 (a-b) shows the Intention and Attention identified from the U1 speech and gesture inputs respectively. The Intention in Figure 4 (a) indicates that the user is requesting RIA (Act: Request) to present her some data (Motivator: DataPresentation) about attributes of certain object(s) (Method: Lookup). The Attention indicates that the information of interest is the price (Aspect: Price) of a certain object (Focus: Instance). The exact object is not known but is referred to by the demonstrative \"this\" (in Constraint). The Intention in Figure 4 (b) does not carry any information since the high-level purpose and the specific task cannot be identified from the gesture input. Furthermore, because of the ambiguity of the deictic gesture, three Attentions are identified: the first two are about the house instances MLS0234765 and MLS0876542 (IDs from the Multiple Listing Service) and the third is about the town of Irvington. (Footnote 1: The generated display has multiple layers, where the house icons are on top of the Irvington town map; thus this deictic gesture could refer either to the town of Irvington or to the houses.)",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 21,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 133,
"end": 141,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 728,
"end": 736,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Intention and Attention",
"sec_num": "3.1.1"
},
{
"text": "In an information seeking environment, based on the conversation context and the graphic display, users can refer to objects using different types of references, for example, through temporal or spatial relations, visual cues, or simply a deictic gesture. Furthermore, users can also search for objects using different constraints on data properties. Therefore, MIND models two major types of constraints: reference constraints and data constraints. Reference constraints characterize different types of references; data constraints specify relations on data properties. A summary of our constraint model is shown in Figure 5 . Both reference constraints and data constraints are characterized by six dimensions. Category sub-categorizes constraints (described later). Manner indicates the specific way such a constraint is expressed. Aspect indicates the feature(s) this constraint concerns. Relation specifies the relation to be satisfied between the object of interest and other objects or values. Anchor provides a particular value, object, or reference point this constraint relates to. Number specifies the cardinal numbers associated with the constraint.",
"cite_spans": [],
"ref_spans": [
{
"start": 617,
"end": 625,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Constraints",
"sec_num": "3.1.2"
},
{
"text": "Reference constraints are further categorized into four categories: Anaphora, Temporal, Visual, and Spatial. An anaphora reference can be expressed through pronouns such as \"it\" or \"them\" (Pronoun), demonstratives such as \"this\" or \"these\" (Demonstrative), here or there (Here/There), or proper names such as \"Lynhurst\" (ProperNoun). An example is shown in Figure 4 (a), where a demonstrative \"this\" (Manner: Demonstrative-This) is used in the utterance \"this house\" to refer to a single house object (Number: 1). Note that",
"cite_spans": [
{
"start": 409,
"end": 428,
"text": "Demonstrative-This)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 357,
"end": 365,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Reference Constraints",
"sec_num": null
},
{
"text": "Manner also keeps track of the specific type of the term. The subtle difference between terms can provide additional cues for resolving references. For example, the different use of \"this\" and \"that\" may indicate the recency of the referent in the user's mental model of the discourse, or the closeness of the referent to the user's visual focus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Constraints",
"sec_num": null
},
{
"text": "Temporal references use temporal relations to refer to entities that occurred in the prior conversation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Constraints",
"sec_num": null
},
{
"text": "Manner is characterized by Relative and Absolute. Relative indicates a temporal relation with respect to a certain point in a conversation, and Absolute specifies a temporal relation with respect to the whole interaction. Relation indicates the temporal relations (e.g., Precede or Succeed) or ordinal relations (e.g., first). Anchor indicates a reference point. For example, as in Figure 6 (a), a Relative temporal constraint is used since \"the previous house\" refers to the house that precedes the current focus (Anchor: Current) in the conversation history. On the other hand, in the input \"the first house you showed me,\" an Absolute temporal constraint is used since the user is interested in the first house shown to her at the beginning of the entire conversation.",
"cite_spans": [],
"ref_spans": [
{
"start": 379,
"end": 387,
"text": "Figure 6",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Reference Constraints",
"sec_num": null
},
{
"text": "Spatial references describe entities on the graphic display in terms of their spatial relations. Manner is again characterized by Absolute and Relative. Absolute indicates that entities are specified through orientations (e.g., left or right, captured by Relation) with respect to the whole display screen (Anchor: Display-Frame). In contrast, Relative specifies that entities are described through orientations with respect to a particular sub-frame (Anchor: FocusFrame, e.g., an area of the display). Visual references describe entities in terms of their visual properties: Aspect indicates the visual entity used (such as Color and Shape, which are defined in our domain ontology), and Relation specifies the relation to be satisfied between the visual entity and some value. For example, the constraint used in the input \"the green house\" is shown in Figure 6 (b). It is worth mentioning that during reference resolution, the color Green will be further mapped to the internal color encoding used by graphics generation.",
"cite_spans": [],
"ref_spans": [
{
"start": 757,
"end": 765,
"text": "Figure 6",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Reference Constraints",
"sec_num": null
},
{
"text": "Data constraints describe objects in terms of their actual data attributes (Category: Attributive). A Manner of Comparative indicates the constraint is about a comparative relation between (aspects of) the desired entities and other entities or values. Superlative indicates the constraint is about minimum or maximum requirement(s) on particular attribute(s). Fuzzy indicates a fuzzy description of the attributes (e.g., \"cheap house\"). For example, for the input \"houses under 300,000 dollars\" in Figure 7(a) , Manner is Comparative since the constraint is about a \"less than\" relationship (Relation: Less-Than) between the price (Aspect: Price) of the desired object(s) and a particular value (Anchor: \"300000 dollars\"). For the input \"3 largest houses\" in Figure 7 (b), Manner is Superlative since it is about the maximum (Relation: Max) requirement on the size of the houses (Aspect: Size).",
"cite_spans": [
{
"start": 607,
"end": 617,
"text": "Less-Than)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 503,
"end": 514,
"text": "Figure 7(a)",
"ref_id": "FIGREF5"
},
{
"start": 764,
"end": 772,
"text": "Figure 7",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Data Constraints",
"sec_num": null
},
{
"text": "The refined characterization of different constraints provides rich cues for MIND to identify objects of interest. In an information seeking environment, the objects sought can come from different sources. They could be entities that have been described earlier in the conversation, entities that are visible on the display, or entities that have never been mentioned or seen but exist in a database. Thus, fine-grained constraints allow MIND to determine where and how to find the information of interest. For example, temporal constraints help MIND navigate the conversation history by providing guidance on where to start, which direction to follow in the conversation history, and how many to look for.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Constraints",
"sec_num": null
},
{
"text": "Our fine-grained semantic models of intention, attention, and constraints characterize user information needs and therefore enable the system to come up with an intelligent response. Furthermore, these models are domain independent and can be applied to any information seeking application (for structured information).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Constraints",
"sec_num": null
},
{
"text": "Given the semantic models of intention, attention and constraints, MIND represents those models using a combination of feature structures (Carpenter, 1992) . This representation is inspired by the earlier works (Johnston et al., 1997; Johnston, 1998) and offers the flexibility to accommodate complex inputs. Specifically, MIND represents the intention, attention, and constraints identified from user inputs as a result of both unimodal understanding and multimodal understanding.",
"cite_spans": [
{
"start": 138,
"end": 155,
"text": "(Carpenter, 1992)",
"ref_id": "BIBREF1"
},
{
"start": 211,
"end": 234,
"text": "(Johnston et al., 1997;",
"ref_id": "BIBREF6"
},
{
"start": 235,
"end": 250,
"text": "Johnston, 1998)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representing User Inputs",
"sec_num": "3.1.3"
},
{
"text": "During unimodal understanding, MIND applies a decision tree based semantic parser on natural language inputs (Jelinek et al., 1994) to identify salient information. For the gesture input, MIND applies a simple geometry-based recognizer. As a result, information from each unimodal input is represented in a modality unit. We have seen several modality units (in Figure 4, Figure 6 , and Figure 7) , where intention, attention and constraints are represented in feature structures. Note that only features that can be instantiated by information from the user input are included in the feature structure. For example, since the exact object cannot be identified from U1 speech input, the Content feature is not included in its Attention structure (Figure 4a ). In addition to intention, attention and constraints, a modality unit also keeps a time stamp that indicates when a particular input takes place. This time information is used for multimodal alignment which we do not discuss here.",
"cite_spans": [
{
"start": 109,
"end": 131,
"text": "(Jelinek et al., 1994)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 362,
"end": 380,
"text": "Figure 4, Figure 6",
"ref_id": "FIGREF2"
},
{
"start": 387,
"end": 396,
"text": "Figure 7)",
"ref_id": "FIGREF5"
},
{
"start": 746,
"end": 756,
"text": "(Figure 4a",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Representing User Inputs",
"sec_num": "3.1.3"
},
{
"text": "Depending on the complexity of user inputs, the representation can be composed by a flexible combination of feature structures. For example, U4 in Figure 2 is a complex input, where the speech input \"what about houses with this style around here\" involves multiple objects with different relations. The modality unit created for the U4 speech input is shown in Figure 8(a) . The Attention feature structure (A1) contains two attributive constraints indicating that the objects of interest are a collection of houses satisfying those constraints. The first constraint is about the style (Aspect: Style), and the second is about the location. Both of these constraints are related to other objects (Manner: Comparative), which are represented by Attention structures A2 and A3 through Anchor respectively. A2 indicates an unknown object referred to by a Demonstrative reference constraint (\"this style\"), and A3 indicates a geographic location object referred to by \"here\". Since these two references overlap with a single deictic gesture, it is hard to decide which one should be unified with the gesture input. We will show in Section 4.3 that the fine-grained representation in Figure 8 (a) allows MIND to use contexts to resolve these two references and improve alignment.",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 127,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 332,
"end": 343,
"text": "Figure 8(a)",
"ref_id": null
},
{
"start": 1167,
"end": 1175,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Representing User Inputs",
"sec_num": "3.1.3"
},
{
"text": "During multimodal understanding, MIND combines information from modality units and generates a conversation unit that represents the overall meaning of user multimodal inputs. A conversation unit has the same type of intention and attention feature structures, as well as the feature structure for data constraints. Since references are resolved during the multimodal understanding process, reference constraints are no longer present in conversation units. For example, once the two references in Figure 8 (a) are resolved during multimodal understanding (details are described in Section 4.3) and MIND identifies that \"this style\" is \"Victorian\" and \"here\" is \"White Plains\", it creates a conversation unit representing the overall meaning of this input in Figure 8(b) .",
"cite_spans": [],
"ref_spans": [
{
"start": 512,
"end": 520,
"text": "Figure 8",
"ref_id": null
},
{
"start": 770,
"end": 781,
"text": "Figure 8(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Representing User Inputs",
"sec_num": "3.1.3"
},
{
"text": "MIND uses a conversation history to represent the conversation context based on the goals or sub-goals of user inputs and RIA outputs. For example, in the conversation fragment mentioned earlier (Figure 2 ), the first user input (U1) initiates a goal of looking up the price of a particular house. Due to the ambiguous gesture input, in the next turn, RIA (R2) initiates a sub-goal of disambiguating the house of interest. This sub-goal contributes to the goal initiated by U1. Once the user replies with the house of interest (U2), the sub-goal is fulfilled. Then RIA gives the price information (R2), and the goal initiated by U1 is accomplished. To reflect this progress, our conversation history is a hierarchical structure consisting of conversation segments and conversation units (in Figure 9 ). As mentioned earlier, a conversation unit records the user's (rectangles U1, U2) or RIA's (rectangles R1, R2) overall meanings at a single turn in the conversation. These units can be grouped together to form a conversation segment (ovals DS1, DS2) based on their goals and sub-goals. Furthermore, a conversation segment contains not only intention and attention, but also other information such as the conversation-initiating participant (Initiator). In addition to conversation segments and conversation units, a conversation history also maintains different relations between segments and between units. Details can be found in (Chai et al., 2002) .",
"cite_spans": [
{
"start": 874,
"end": 881,
"text": "U1, U2)",
"ref_id": null
},
{
"start": 1428,
"end": 1447,
"text": "(Chai et al., 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 195,
"end": 204,
"text": "(Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 796,
"end": 804,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Representing Conversation Context",
"sec_num": "3.2"
},
{
"text": "Another main characteristic of our representation is the consistent representation of intention and attention across different levels. Just like modality units and conversation units, conversation segments also consist of the same type of intention and attention feature structures (as shown in Figure 9 ). This consistent representation not only supports unification-based multimodal fusion, but also enables context-based inference to enhance interpretation (described later).",
"cite_spans": [],
"ref_spans": [
{
"start": 295,
"end": 303,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Representing Conversation Context",
"sec_num": "3.2"
},
{
"text": "We have described our semantics-based representation and presented its three characteristics: fine-grained semantic models, flexible composition, and consistent representation. Next we will show how this representation is used effectively in the multimodal interpretation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Conversation Context",
"sec_num": "3.2"
},
{
"text": "As mentioned earlier, multimodal interpretation in MIND consists of three processes: unimodal understanding, multimodal understanding, and discourse understanding. Here we focus on multimodal understanding. The key difference between MIND and earlier works is the use of rich contexts to improve understanding. Specifically, multimodal understanding consists of two sub-processes: multimodal fusion and context-based inference. Multimodal fusion fuses intention and attention structures (from modality units) for unimodal inputs and forms a combined representation. Context-based inference uses rich contexts to improve interpretation by resolving ambiguities, deriving unspecified information, and improving alignment. (Figure 9 shows a fragment of a conversation history.)",
"cite_spans": [],
"ref_spans": [
{
"start": 604,
"end": 612,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Use of Representation in Multimodal Interpretation",
"sec_num": "4"
},
{
"text": "User inputs could be ambiguous. For example, in U1, the deictic gesture is not directly on a particular object, so fusing the intention and attention structures from the individual inputs leaves some ambiguities. In Figure 4(b) , there are three Attention structures for the U1 gesture input. Each of them can be unified with the Attention structure from the U1 speech input (in Figure 4a) . The result of fusion is shown in Figure 10(a) . Since the reference constraint in the speech input (Number: 1 in Figure 4a ) indicates that only one attention structure is allowed, MIND uses contexts to eliminate inconsistent structures. In this case, A3 in Figure 10 (a) indicates that the information of interest is the price of the city Irvington. Based on the domain knowledge that a city object cannot have the price feature, A3 is filtered out. As a result, both A1 and A2 are potential interpretations. Therefore, the Content in those structures is combined using a disjunctive relation as in Figure 10 (b). Based on this revised conversation unit, RIA is able to arrange a follow-up question to further disambiguate the house of interest (R2 in Figure 2 ). This example shows that modeling semantic information by fine-grained dimensions supports the use of domain knowledge in context-based inference, and can therefore resolve some ambiguities.",
"cite_spans": [],
"ref_spans": [
{
"start": 225,
"end": 236,
"text": "Figure 4(b)",
"ref_id": "FIGREF2"
},
{
"start": 380,
"end": 390,
"text": "Figure 4a)",
"ref_id": "FIGREF2"
},
{
"start": 426,
"end": 438,
"text": "Figure 10(a)",
"ref_id": null
},
{
"start": 506,
"end": 515,
"text": "Figure 4a",
"ref_id": "FIGREF2"
},
{
"start": 651,
"end": 660,
"text": "Figure 10",
"ref_id": null
},
{
"start": 996,
"end": 1005,
"text": "Figure 10",
"ref_id": null
},
{
"start": 1151,
"end": 1159,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Resolving Ambiguities",
"sec_num": "4.1"
},
{
"text": "In a conversation setting, user inputs are often abbreviated. Users tend to provide only new information when it is their turn to interact. Sometimes, fusing the individual modalities together still cannot provide the overall meaning of those inputs. For example, after multimodal fusion, the conversation unit for U3 (\"What about this one\") does not give enough information on what the user exactly wants: the motivation and task of this input are not known, as in Figure 11(a) . Only based on the conversation context is MIND able to identify the overall meaning of this input. In this case, based on the most recent conversation segment (DS1) in Figure 9 (also as in Figure 11b ), MIND is able to derive the Motivator and Method features from DS1 to update the conversation unit for U3 (Figure 11c ). As a result, this revised conversation unit provides the overall meaning that the user is interested in finding out the price information about another house, MLS7689432. Note that it is important to maintain a hierarchical conversation history based on goals and sub-goals. Without such a hierarchical structure, MIND would not be able to infer the motivation of U3. Furthermore, because of the consistent representation of intention and attention at both the discourse level (in conversation segments) and the input level (in conversation units), MIND is able to directly use the conversation context to infer unspecified information and enhance interpretation.",
"cite_spans": [],
"ref_spans": [
{
"start": 457,
"end": 469,
"text": "Figure 11(a)",
"ref_id": null
},
{
"start": 641,
"end": 649,
"text": "Figure 9",
"ref_id": null
},
{
"start": 662,
"end": 672,
"text": "Figure 11b",
"ref_id": null
},
{
"start": 777,
"end": 788,
"text": "(Figure 11c",
"ref_id": null
}
],
"eq_spans": [],
"section": "Deriving Unspecified Information",
"sec_num": "4.2"
},
{
"text": "In a multimodal environment, users may coordinate their speech and gesture inputs in different ways. In some cases, one reference/object mentioned in the speech input coordinates with one deictic gesture (U1, U3). In other cases, several references/objects in the speech input are coordinated with one deictic gesture (U4, U5). In the latter cases, time stamps alone often cannot accurately align and fuse the respective attention structures from each modality. Therefore, MIND uses contexts to improve alignment based on our semantics-based representation. For example, from the speech input in U4 (\"show me houses with this style around here\"), three Attention structures are generated, as shown in Figure 8(a). From the gesture input, only one Attention structure is generated, which corresponds to the city of White Plains. Since the gesture input overlaps with both \"this style\" (corresponding to A2) and \"here\" (corresponding to A3), there is no obvious temporal relation indicating which of these two references should be unified with the deictic gesture. In fact, both A2 and A3 are potential candidates. Based on the domain context that a city cannot have a feature Style, MIND determines that the deictic gesture is actually resolving the reference of \"here\". To resolve the reference of \"this style\", MIND uses the visual context, which indicates that a house is highlighted on the screen. A recent study (Kehler, 2000) shows that objects in the visual focus are often referred to by pronouns rather than by full noun phrases or deictic gestures. Based on this study, MIND is able to infer that \"this style\" most likely refers to the style of the highlighted house (MLS7689432). Suppose the style is \"Victorian\"; then MIND is able to determine that the overall meaning of U4 is a request for houses with a Victorian style and located in White Plains (as shown in Figure 8b).
Furthermore, for U5 (\"Comparing these two houses with the previous house\"), two Attention structures (A1 and A2) are created for the speech input, as in Figure 12(a). A1 corresponds to \"these two houses\", where the Number feature in the reference constraint is set to 2. Although there is only one deictic gesture, which points to two potential houses (Figure 12b), MIND is able to figure out that this deictic gesture is actually referring to a group of two houses rather than an ambiguous single house. Although the gesture input in U5 is of the same kind as that in U1, because of the fine-grained information captured from the speech input (i.e., the Number feature), MIND processes them differently. For the second reference, \"previous house\" (A2 in Figure 12a), based on the information captured in the temporal constraint, MIND searches the conversation history and finds the most recently explored house (MLS7689432). Therefore, MIND is able to reach an overall understanding of U5: the user is interested in comparing three houses (as in Figure 12c).",
"cite_spans": [
{
"start": 1423,
"end": 1437,
"text": "(Kehler, 2000)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 712,
"end": 723,
"text": "Figure 8(a)",
"ref_id": null
},
{
"start": 1878,
"end": 1887,
"text": "Figure 8b",
"ref_id": null
},
{
"start": 2049,
"end": 2058,
"text": "Figure 12",
"ref_id": "FIGREF1"
},
{
"start": 2245,
"end": 2256,
"text": "(Figure 12b",
"ref_id": "FIGREF1"
},
{
"start": 2644,
"end": 2654,
"text": "Figure 12a",
"ref_id": "FIGREF1"
},
{
"start": 2939,
"end": 2949,
"text": "Figure 12c",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Improving Alignment",
"sec_num": "4.3"
},
{
"text": "To facilitate multimodal interpretation in conversational systems, we have developed a semantics-based representation to capture salient information from user inputs and the overall conversation. In this paper, we have presented three unique characteristics of our representation. First, our representation is based on fine-grained semantic models of intention, attention, and constraints that are important in information-seeking conversation. Second, our representation is built by flexibly composing feature structures and thus supports complex user inputs. Third, our representation of intention and attention is consistent at different levels and therefore facilitates context-based interpretation. This semantics-based representation allows MIND to use contexts to resolve ambiguities, derive unspecified information, and improve alignment. As a result, MIND is able to process a large variety of user inputs, including incomplete, ambiguous, or complex ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The author would like to thank Shimei Pan and Michelle Zhou for their contributions to the semantic models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Voice and gesture at the graphics interface",
"authors": [
{
"first": "R",
"middle": [],
"last": "Bolt",
"suffix": ""
}
],
"year": 1980,
"venue": "Computer Graphics",
"volume": "",
"issue": "",
"pages": "262--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bolt, R. (1980) Voice and gesture at the graphics interface. Computer Graphics, pages 262-270.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The logic of typed feature structures",
"authors": [
{
"first": "R",
"middle": [],
"last": "Carpenter",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carpenter, R. (1992) The logic of typed feature structures. Cambridge University Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "MIND: A Semantics-based multimodal interpretation framework for conversational systems",
"authors": [
{
"first": "J",
"middle": [],
"last": "Chai",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "M",
"middle": [
"X"
],
"last": "Zhou",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of International CLASS Workshop on Natural, Intelligent and Effective Interaction in Multimodal Dialog Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chai, J.; Pan, S.; and Zhou, M. X. (2002) MIND: A Semantics-based multimodal interpretation framework for conversational systems. To appear in Proceedings of International CLASS Workshop on Natural, Intelligent and Effective Interaction in Multimodal Dialog Systems.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Quickset: Multimodal interaction for distributed applications",
"authors": [
{
"first": "P",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnston",
"suffix": ""
},
{
"first": "D",
"middle": [
"S"
],
"last": "Mcgee",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Oviatt",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pittman",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Clow",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. ACM MM'96",
"volume": "",
"issue": "",
"pages": "31--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, P.; Johnston, M.; McGee, D.; Oviatt, S.; Pittman, J.; Smith, I.; Chen, L.; and Clow, J. (1996) Quickset: Multimodal interaction for distributed applications. Proc. ACM MM'96, pages 31-40.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Attention, intentions, and the structure of discourse",
"authors": [
{
"first": "B",
"middle": [
"J"
],
"last": "Grosz",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Sidner",
"suffix": ""
}
],
"year": 1986,
"venue": "Computational Linguistics",
"volume": "12",
"issue": "3",
"pages": "175--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grosz, B. J. and Sidner, C. (1986) Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Decision tree parsing using a hidden derivation model",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Magerman",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. Darpa Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelinek, F.; Lafferty, J.; Magerman, D. M.; Mercer, R. and Roukos, S. (1994) Decision tree parsing using a hidden derivation model. Proc. Darpa Speech and Natural Language Workshop.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unification based multimodal integration",
"authors": [
{
"first": "M",
"middle": [],
"last": "Johnston",
"suffix": ""
},
{
"first": "P",
"middle": [
"R"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mcgee",
"suffix": ""
},
{
"first": "S",
"middle": [
"L"
],
"last": "Oviatt",
"suffix": ""
},
{
"first": "J",
"middle": [
"A"
],
"last": "Pittman",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. 35th ACL",
"volume": "",
"issue": "",
"pages": "281--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johnston, M.; Cohen, P. R.; McGee, D.; Oviatt, S. L.; Pittman, J. A.; and Smith, I. (1997) Unification based multimodal integration. Proc. 35th ACL, pages 281-288.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unification-based multimodal parsing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Johnston",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. COLING-ACL'98",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johnston, M. (1998) Unification-based multimodal parsing. Proc. COLING-ACL'98.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Cognitive status and form of reference in multimodal human-computer interaction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kehler",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. AAAI'01",
"volume": "",
"issue": "",
"pages": "685--689",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kehler, A. (2000) Cognitive status and form of reference in multimodal human-computer interaction. Proc. AAAI'01, pages 685-689.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "User and discourse models for multimodal communication",
"authors": [
{
"first": "W",
"middle": [],
"last": "Wahlster",
"suffix": ""
}
],
"year": 1998,
"venue": "Intelligent User Interfaces",
"volume": "",
"issue": "",
"pages": "359--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wahlster, W. (1998) User and discourse models for multimodal communication. In M. Maybury and W. Wahlster, editors, Intelligent User Interfaces, pages 359-370.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multimodal interaction for information access: Exploiting cohesion",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zancanaro",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Stock",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Strapparava",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Intelligence",
"volume": "13",
"issue": "4",
"pages": "439--464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zancanaro, M.; Stock, O.; and Strapparava, C. (1997) Multimodal interaction for information access: Exploiting cohesion. Computational Intelligence, 13(4):439-464.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automated authoring of coherent multimedia discourse for conversation systems",
"authors": [
{
"first": "M",
"middle": [
"X"
],
"last": "Zhou",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pan",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. ACM MM'01",
"volume": "",
"issue": "",
"pages": "555--559",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou, M. X. and Pan, S. (2001) Automated authoring of coherent multimedia discourse for conversation systems. Proc. ACM MM'01, pages 555-559.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Figure 1. MIND components"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "A conversation fragment"
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Intention and Attention for U1 unimodal inputs"
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Temporal and visual reference constraints"
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "\"the previous house\" (b) \"the green house\") or another object. Visual references describe entities on the graphic output using visual properties (such as displayed colors or shapes) or visual techniques (such as highlighting). Manner of Comparative indicates that a visual entity is compared with another value (captured by Anchor)."
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Attributive data constraints nation of different feature structures. Specifically, an attention structure may have a constraint structure as its feature, and on the other hand, a constraint structure may also include another attention structure."
},
"FIGREF6": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Figure 10. Resolving ambiguity for U1"
}
}
}
}