{
"paper_id": "N04-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:45:09.640145Z"
},
"title": "Balancing Data-driven and Rule-based Approaches in the Context of a Multimodal Conversational System",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Labs-Research",
"location": {
"addrLine": "180 Park Avenue Florham Park",
"postCode": "07932",
"region": "NJ"
}
},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Johnston",
"suffix": "",
"affiliation": {
"laboratory": "AT&T Labs-Research",
"institution": "",
"location": {
"addrLine": "180 Park Avenue Florham Park",
"postCode": "07932",
"region": "NJ"
}
},
"email": "johnston@research.att.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Moderate-sized rule-based spoken language models for recognition and understanding are easy to develop and provide the ability to rapidly prototype conversational applications. However, scalability of such systems is a bottleneck due to the heavy cost of authoring and maintenance of rule sets and inevitable brittleness due to lack of coverage in the rule sets. In contrast, data-driven approaches are robust and the procedure for model building is usually simple. However, the lack of data in a particular application domain limits the ability to build data-driven models. In this paper, we address the issue of combining data-driven and grammar-based models for rapid prototyping of robust speech recognition and understanding models for a multimodal conversational system. We also present methods that reuse data from different domains and investigate the limits of such models in the context of a particular application domain.",
"pdf_parse": {
"paper_id": "N04-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "Moderate-sized rule-based spoken language models for recognition and understanding are easy to develop and provide the ability to rapidly prototype conversational applications. However, scalability of such systems is a bottleneck due to the heavy cost of authoring and maintenance of rule sets and inevitable brittleness due to lack of coverage in the rule sets. In contrast, data-driven approaches are robust and the procedure for model building is usually simple. However, the lack of data in a particular application domain limits the ability to build data-driven models. In this paper, we address the issue of combining data-driven and grammar-based models for rapid prototyping of robust speech recognition and understanding models for a multimodal conversational system. We also present methods that reuse data from different domains and investigate the limits of such models in the context of a particular application domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the past four decades of speech and natural language processing, both data-driven approaches and rule-based approaches have been prominent at different periods in time. In the recent past, rule-based approaches have fallen into disfavor due to their brittleness and the significant cost of authoring and maintaining complex rule sets. Data-driven approaches are robust and provide a simple process of developing applications given the data from the application domain. However, the reliance on domain-specific data is also one of the significant bottlenecks of data-driven approaches. Development of a conversational system using data-driven approaches cannot proceed until data pertaining to the application domain is available. The collection and annotation of such data is extremely time-consuming and tedious, which is aggravated by the presence of multiple modalities in the user's input, as in our case. Moreover, extending an existing application to support a new feature requires collecting additional data that exhibits that feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we explore various methods for combining rule-based and in-domain data for rapid prototyping of speech recognition and understanding models that are robust to ill-formed or unexpected input in the context of a multimodal conversational system. We also investigate approaches to reuse out-of-domain data and compare their performance against the performance of in-domain data-driven models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We investigate these issues in the context of a multimodal application designed to provide an interactive city guide: MATCH. In Section 2, we present the MATCH application, the architecture of the system and the apparatus for multimodal understanding. In Section 3, we discuss various approaches to rapid prototyping of the language model for the speech recognizer and in Section 4 we present two approaches to robust multimodal understanding. Section 5 presents the results for speech recognition and multimodal understanding using the different approaches we consider.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "MATCH (Multimodal Access To City Help) is a working city guide and navigation system that enables mobile users to access restaurant and subway information for New York City (NYC) (Johnston et al., 2002b; Johnston et al., 2002a) . The user interacts with a graphical interface displaying restaurant listings and a dynamic map showing locations and street information. The inputs can be speech, drawing on the display with a stylus, or synchronous multimodal combinations of the two modes. The user can ask for the review, cuisine, phone number, address, or other information about restaurants and subway directions to locations. The system responds with graphical callouts on the display, synchronized with synthetic speech output. For example, if the user says phone numbers for these two restaurants and circles two restaurants as in Figure 1 [a], the system will draw a callout with the restaurant name and number and say, for example Time Cafe can be reached at 212-533-7000, for each restaurant in turn ( Figure 1 [b] ). If the immediate environment is too noisy or public, the same command can be given completely in pen by circling the restaurants and writing phone. ",
"cite_spans": [
{
"start": 179,
"end": 203,
"text": "(Johnston et al., 2002b;",
"ref_id": "BIBREF13"
},
{
"start": 204,
"end": 227,
"text": "Johnston et al., 2002a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 835,
"end": 843,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1009,
"end": 1021,
"text": "Figure 1 [b]",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The MATCH application",
"sec_num": "2"
},
{
"text": "The underlying architecture that supports MATCH consists of a series of re-usable components which communicate over sockets through a facilitator (MCUBE) (Figure 2) . Users interact with the system through a Multimodal User Interface Client (MUI). Their speech and ink are processed by speech recognition (Sharp et al., 1997) (ASR) and handwriting/gesture recognition (GESTURE, HW RECO) components respectively. These recognition processes result in lattices of potential words and gestures. These are then combined and assigned a meaning representation using a multimodal finite-state device (MMFST) Johnston et al., 2002b) . This provides as output a lattice encoding all of the potential meaning representations assigned to the user inputs. This lattice is flattened to an N-best list and passed to a multimodal dialog manager (MDM) (Johnston et al., 2002b) , which re-ranks them in accordance with the current dialogue state. If additional information or confirmation is required, the MDM enters into a short information gathering dialogue with the user. Once a command or query is complete, it is passed to the multimodal generation component (MMGEN), which builds a multimodal score indicating a coordinated sequence of graphical actions and TTS prompts. This score is passed back to the Multimodal UI (MUI). The Multimodal UI coordinates presentation of graphical content with synthetic speech output using the AT&T Natural Voices TTS engine (Beutnagel et al., 1999) . The subway route constraint solver (SUBWAY) identifies the best route between any two points in New York City. ",
"cite_spans": [
{
"start": 154,
"end": 164,
"text": "(Figure 2)",
"ref_id": null
},
{
"start": 305,
"end": 325,
"text": "(Sharp et al., 1997)",
"ref_id": "BIBREF21"
},
{
"start": 601,
"end": 624,
"text": "Johnston et al., 2002b)",
"ref_id": "BIBREF13"
},
{
"start": 836,
"end": 860,
"text": "(Johnston et al., 2002b)",
"ref_id": "BIBREF13"
},
{
"start": 1449,
"end": 1473,
"text": "(Beutnagel et al., 1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MATCH Multimodal Architecture",
"sec_num": "2.1"
},
{
"text": "Our approach to integrating and interpreting multimodal inputs (Johnston et al., 2002b; Johnston et al., 2002a) is an extension of the finite-state approach previously proposed . In this approach, a declarative multimodal grammar captures both the structure and the interpretation of multimodal and unimodal commands. The grammar consists of a set of context-free rules. The multimodal aspects of the grammar become apparent in the terminals, each of which is a triple W:G:M, consisting of speech (words, W), gesture (gesture symbols, G), and meaning (meaning symbols, M). The multimodal grammar encodes not just multimodal integration patterns but also the syntax of speech and gesture, and the assignment of meaning. The meaning is represented in XML, facilitating parsing and logging by other system components. The symbol SEM is used to abstract over specific content such as the set of points delimiting an area or the identifiers of selected objects. In Figure 3 , we present a small simplified fragment from the MATCH application capable of handling information seeking commands such as phone for these three restaurants. The epsilon symbol (ε) indicates that a stream is empty in a given terminal. Figure 4 : Multimodal Example In the example above where the user says phone for these two restaurants while circling two restaurants (Figure 1 [a] ), assume the speech recognizer returns the lattice in Figure 4 (Speech). The gesture recognition component also returns a lattice (Figure 4 , Gesture) indicating that the user's ink is either a selection of two restaurants or a geographical area. The multimodal grammar (Figure 3 ) expresses the relationship between what the user said, what they drew with the pen, and their combined meaning, in this case Figure 4 (Meaning). The meaning is generated by concatenating the meaning symbols and replacing SEM with the appropriate specific content:",
"cite_spans": [
{
"start": 63,
"end": 87,
"text": "(Johnston et al., 2002b;",
"ref_id": "BIBREF13"
},
{
"start": 88,
"end": 111,
"text": "Johnston et al., 2002a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 961,
"end": 969,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1207,
"end": 1215,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1341,
"end": 1354,
"text": "(Figure 1 [a]",
"ref_id": "FIGREF0"
},
{
"start": 1410,
"end": 1418,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1486,
"end": 1495,
"text": "(Figure 4",
"ref_id": null
},
{
"start": 1626,
"end": 1635,
"text": "(Figure 3",
"ref_id": null
},
{
"start": 1763,
"end": 1771,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multimodal Integration and Understanding",
"sec_num": "2.2"
},
{
"text": "<cmd><info><type> phone </type><obj><rest> [r12,r15] </rest></obj></info></cmd>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Integration and Understanding",
"sec_num": "2.2"
},
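The meaning-assembly step just described (concatenating the meaning symbols along an accepted path of W:G:M triples and splicing specific content in for SEM) can be sketched as a toy in Python. The terminals and symbols below are invented for illustration; they are not the actual MATCH grammar.

```python
# Toy illustration of meaning assembly from multimodal grammar terminals.
# Each terminal is a (speech, gesture, meaning) triple; EPS marks an
# empty stream in that terminal.

EPS = "eps"

def assemble_meaning(path, sem_content):
    """Concatenate meaning symbols along an accepted path, replacing each
    SEM placeholder with the next piece of specific content (e.g. the
    gesture-selected restaurant identifiers)."""
    sems = iter(sem_content)
    out = []
    for speech, gesture, meaning in path:
        if meaning == EPS:
            continue
        out.append(next(sems) if meaning == "SEM" else meaning)
    return "".join(out)

# "phone for these two restaurants" + circling two restaurants
path = [
    ("phone", EPS, "<cmd><info><type>phone</type>"),
    ("for", EPS, EPS),
    ("these", EPS, "<obj>"),
    ("two", "G2", EPS),
    ("restaurants", "rest", "<rest>"),
    (EPS, "SEM", "SEM"),
    (EPS, EPS, "</rest></obj></info></cmd>"),
]
print(assemble_meaning(path, ["[r12,r15]"]))
```

The printed string is the XML meaning representation for the example command, with the selected restaurant identifiers substituted for SEM.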
{
"text": "For the purpose of evaluation of concept accuracy, we developed an approach similar to (Boros et al., 1996) in which computing concept accuracy is reduced to comparing strings representing core contentful concepts. We extract a sorted flat list of attribute value pairs that represents the core contentful concepts of each command from the XML output. The example above yields the following meaning representation for concept accuracy.",
"cite_spans": [
{
"start": 87,
"end": 107,
"text": "(Boros et al., 1996)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Integration and Understanding",
"sec_num": "2.2"
},
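The concept-accuracy reduction described above (flattening the XML meaning to a sorted list of attribute:value pairs so two meanings can be compared as strings) can be sketched as follows. The tag names follow the example meaning shown in this paper; the exact normalization used by the authors is not specified, so this is only an assumed, minimal variant.

```python
# Sketch of concept-accuracy normalization: reduce an XML meaning
# representation to a sorted flat list of attribute:value pairs so that
# two meanings can be compared as strings, ignoring element order.
import xml.etree.ElementTree as ET

def flatten(xml_string):
    root = ET.fromstring(xml_string)
    pairs = [f"{el.tag}:{el.text.strip()}"
             for el in root.iter()
             if el.text and el.text.strip()]  # keep contentful leaves only
    return " ".join(sorted(pairs))

ref = "<cmd><info><type>phone</type><obj><rest>[r12,r15]</rest></obj></info></cmd>"
hyp = "<cmd><info><obj><rest>[r12,r15]</rest></obj><type>phone</type></info></cmd>"
print(flatten(ref) == flatten(hyp))  # order-insensitive comparison
```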
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7 \u00a9 ! \" \u00a5 # $ ! % \u00a9 ! & ( ' ) ! # \u00a6 \u00a5 0 ( 1 2 ! # 3 \u00a7 % % \u00a9",
"eq_num": "(1)"
}
],
"section": "Multimodal Integration and Understanding",
"sec_num": "2.2"
},
{
"text": "The multimodal grammar can be used to create language models for ASR, align the speech and gesture results from the respective recognizers and transform the multimodal utterance to a meaning representation. All these operations are achieved using finite-state transducer operations (See for details). However, this approach to recognition needs to be more robust to extragrammaticality and language variation in user's utterances and the interpretation needs to be more robust to speech recognition errors. We address these issues in the rest of the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Integration and Understanding",
"sec_num": "2.2"
},
{
"text": "The problem of speech recognition can be succinctly represented as a search for the most likely word sequence (Ŵ) through the network created by the composition of a language of acoustic observations (O), an acoustic model, which is a transduction from acoustic observations to phone sequences (A), a pronunciation model, which is a transduction from phone sequences to word sequences (P), and a language model acceptor (L) (Pereira and Riley, 1997). The language model acceptor encodes the (weighted) word sequences permitted in an application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bootstrapping Corpora for Language Models",
"sec_num": "3"
},
{
"text": "Ŵ = argmax_W [O ∘ A ∘ P ∘ L](W) (2) Typically, L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bootstrapping Corpora for Language Models",
"sec_num": "3"
},
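Equation 2's cascade can be illustrated with a toy sketch: each stage is a weighted relation, the stages are composed, and the cheapest word sequence is kept. Real systems implement this with weighted finite-state transducers; the tables and scores below are invented for illustration only.

```python
# Minimal sketch of decoding as weighted relation composition (Equation 2):
# chain acoustic, pronunciation, and language model stages and keep the
# cheapest word sequence. Each relation maps an input string to a list of
# (output string, cost) pairs; costs add along a path.

def compose(rel_a, rel_b):
    """Compose two weighted relations given as {inp: [(out, cost), ...]}."""
    out = {}
    for inp, pairs in rel_a.items():
        for mid, c1 in pairs:
            for end, c2 in rel_b.get(mid, []):
                out.setdefault(inp, []).append((end, c1 + c2))
    return out

# observations -> phone strings (acoustic model A, toy costs)
A = {"obs1": [("f ow n", 1.0), ("f aa n", 1.5)]}
# phone strings -> words (pronunciation model P)
P = {"f ow n": [("phone", 0.2)], "f aa n": [("fawn", 0.2)]}
# words -> words with a language model cost (acceptor L)
L = {"phone": [("phone", 0.1)], "fawn": [("fawn", 2.0)]}

decoded = compose(compose(A, P), L)
best = min(decoded["obs1"], key=lambda t: t[1])
print(best[0])  # cheapest total-cost word sequence
```

Here "phone" wins because its path cost (1.0 + 0.2 + 0.1) is lower than that of "fawn" (1.5 + 0.2 + 2.0), mirroring how the language model acceptor reweights acoustically plausible alternatives.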
{
"text": "is built using either a hand-crafted grammar or a statistical language model derived from a corpus of sentences from the application domain. While a grammar could be written so as to be easily portable across applications, it suffers from being too prescriptive and has no metric for the relative likelihood of users' utterances. In contrast, in the data-driven approach a weighted grammar is automatically induced from a corpus and the weights can be interpreted as a measure of the relative likelihood of users' utterances. However, the reliance on a domain-specific corpus is one of the significant bottlenecks of data-driven approaches, since collecting a corpus specific to a domain is an expensive and time-consuming task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bootstrapping Corpora for Language Models",
"sec_num": "3"
},
{
"text": "In this section, we investigate a range of techniques for producing a domain-specific corpus using resources such as a domain-specific grammar as well as an out-of-domain corpus. We refer to the corpus resulting from such techniques as a domain-specific derived corpus, in contrast to a domain-specific collected corpus. The idea is that the derived domain-specific corpus would obviate the need for in-domain corpus collection. In particular, we are interested in techniques that would result in corpora such that the performance of language models trained on them would rival the performance of models trained on corpora collected specifically for the domain. We investigate these techniques in the context of MATCH.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bootstrapping Corpora for Language Models",
"sec_num": "3"
},
{
"text": "We use the notation ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bootstrapping Corpora for Language Models",
"sec_num": "3"
},
{
"text": "In order to evaluate the MATCH system, we collected a corpus of multimodal utterances for the MATCH domain in a laboratory setting from a set of sixteen first-time users (8 male, 8 female). We use this corpus to establish a point of reference to compare the models trained on derived corpora against models trained on an in-domain corpus. A total of 833 user interactions (218 multimodal / 491 speech-only / 124 pen-only) resulting from six sample task scenarios involving finding restaurants of various types and getting their names, phones, addresses, or reviews, and getting subway directions between locations were collected and annotated. The data collected was conversational speech where the users gestured and spoke freely. We built a class-based trigram language model (LM_MATCH) using the 709 multimodal and speech-only utterances as the corpus (C_MATCH). The performance of this model serves as the point of reference to compare the performance of language models trained on derived corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model using in-domain corpus",
"sec_num": "3.1"
},
{
"text": "The multimodal CFG (a fragment is presented in Section 2) encodes the repertoire of language and gesture commands allowed by the system and their combined interpretations. The CFG can be approximated by an FSM with arcs labeled with language, gesture and meaning symbols, using well-known compilation techniques (Nederhof, 1997) . The resulting FSM can be projected onto the language component and used as the language model acceptor (L_gram) for speech recognition. Note that the resulting language model acceptor is unweighted if the grammar is unweighted, and it is not robust to language variation in the user's input. However, due to the tight coupling of the grammar used for recognition and interpretation, every recognized string can be assigned an interpretation (though it may not necessarily be the intended interpretation).",
"cite_spans": [
{
"start": 310,
"end": 326,
"text": "(Nederhof, 1997)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar as Language Model",
"sec_num": "3.2"
},
{
"text": "As mentioned earlier, a hand-crafted grammar typically suffers from the problem of being too restrictive and inadequate to cover the variation and extra-grammaticality of users' input. In contrast, an N-gram language model derives its robustness by permitting all strings over an alphabet, albeit with different likelihoods. In an attempt to provide robustness to the grammar-based model, we created a corpus (C_gram) of sentences by randomly sampling the set of paths of the grammar and built a class-based N-gram language model (LM_gram) using this corpus. Although this corpus might not represent the true distribution of sentences in the MATCH domain, we are able to derive some of the benefits of N-gram language modeling techniques. This technique is similar to Galescu et al. (1998) .",
"cite_spans": [
{
"start": 761,
"end": 782,
"text": "Galescu et al. (1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar-based N-gram Language Model",
"sec_num": "3.3"
},
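The corpus-derivation idea of Section 3.3 (sample random paths of the grammar, then train an N-gram model on the samples) can be sketched as a toy. The CFG rules below are invented stand-ins for the MATCH grammar, and the model here is just raw bigram counts rather than a smoothed class-based trigram model.

```python
# Sketch of deriving a corpus by randomly sampling grammar paths and
# collecting N-gram statistics from the sampled sentences.
import random
from collections import Counter

GRAMMAR = {
    "S": [["CMD", "for", "DEIC", "restaurants"]],
    "CMD": [["phone"], ["review"], ["address"]],
    "DEIC": [["these"], ["those"]],
}

def sample(symbol="S", rng=random):
    """Expand a nonterminal by a random derivation; terminals pass through."""
    if symbol not in GRAMMAR:
        return [symbol]
    rule = rng.choice(GRAMMAR[symbol])
    return [w for sym in rule for w in sample(sym, rng)]

def bigram_counts(sentences):
    counts = Counter()
    for s in sentences:
        for a, b in zip(["<s>"] + s, s + ["</s>"]):
            counts[(a, b)] += 1
    return counts

random.seed(0)
corpus = [sample() for _ in range(200)]
counts = bigram_counts(corpus)
print(counts.most_common(3))
```

Because the sampled corpus reflects the grammar rather than real usage, its N-gram distribution is only as faithful as the grammar itself, which is the limitation the paper notes.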
{
"text": "A straightforward extension of the idea of sampling the grammar in order to create a corpus is to select those sentences from the grammar which make the resulting corpus \"similar\" to the corpus collected in the pilot studies. In order to create this corpus, we choose the most likely sentences as determined by a language model (LM_MATCH) trained on the in-domain corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Grammar and Corpus",
"sec_num": "3.4"
},
{
"text": "C_sel = { s_1, s_2, ..., s_N }, s_i ∈ lang(G) (3) s_i = argmax_{s ∈ lang(G)} P(s | LM_MATCH) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Grammar and Corpus",
"sec_num": "3.4"
},
{
"text": "An alternative to using in-domain corpora for building language models is to \"migrate\" a corpus of a different domain to the MATCH domain. The process of migrating a corpus involves suitably generalizing the corpus to remove information specific only to the out-of-domain and instantiating the generalized corpus to the MATCH domain. Although there are a number of ways of generalizing the out-of-domain corpus, the generalization we have investigated involved identifying linguistic units, such as noun and verb chunks in the out-of-domain corpus and treating them as classes. These classes are then instantiated to the corresponding linguistic units from the MATCH domain. The identification of the linguistic units in the out-of-domain corpus is done automatically using a supertagger (Bangalore and Joshi, 1999) . We use a corpus collected in the context of a software helpdesk application as an example out-of-domain corpus. In cases where the out-of-domain corpus is closely related to the domain at hand, a more semantically driven generalization might be more suitable.",
"cite_spans": [
{
"start": 788,
"end": 815,
"text": "(Bangalore and Joshi, 1999)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Class-based Out-of-domain Language Model",
"sec_num": "3.5"
},
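The migration step of Section 3.5 (generalize out-of-domain chunks to classes, then instantiate the classes with in-domain units) can be sketched as follows. Here the chunk spans are hand-marked and the substitution is plain string templating; the paper identifies chunks automatically with a supertagger, which this toy does not attempt.

```python
# Sketch of migrating an out-of-domain corpus: noun chunks in a
# helpdesk-style sentence are generalized to a class token <NP>, and the
# template is instantiated with MATCH-domain chunks.
import itertools

def generalize(sentence, chunks):
    """Replace each listed chunk with the NP class token."""
    for chunk in chunks:
        sentence = sentence.replace(chunk, "<NP>")
    return sentence

def instantiate(template, np_list):
    """Fill every <NP> slot with each combination of in-domain chunks."""
    slots = template.count("<NP>")
    out = []
    for combo in itertools.product(np_list, repeat=slots):
        s = template
        for np in combo:
            s = s.replace("<NP>", np, 1)
        out.append(s)
    return out

template = generalize("show me my account balance", ["my account balance"])
match_nps = ["cheap italian restaurants", "the subway route"]
for s in instantiate(template, match_nps):
    print(s)
```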
{
"text": "We investigate the performance of a large vocabulary conversational speech recognition system when applied to a specific domain such as MATCH. We used the Switchboard corpus. This bootstrapping mechanism can be used to derive a domain-specific corpus and language model without any transcriptions. Similar techniques for unsupervised language model adaptation are presented in (Bacchiani and Roark, 2003; Souvignier and Kellner, 1998) .",
"cite_spans": [
{
"start": 384,
"end": 411,
"text": "(Bacchiani and Roark, 2003;",
"ref_id": "BIBREF1"
},
{
"start": 412,
"end": 441,
"text": "Souvignier and Kellner, 1998)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting the SwitchBoard Language Model",
"sec_num": "3.6"
},
{
"text": "C_SWBD = { ŝ_1, ŝ_2, ..., ŝ_N } (5) ŝ_i = argmax_W [O_i ∘ A ∘ P ∘ L_SWBD](W)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting the SwitchBoard Language Model",
"sec_num": "3.6"
},
{
"text": "3.7 Adapting a wide-coverage grammar There have been a number of computational implementations of wide-coverage, domain-independent, syntactic grammars for English in various formalisms (XTAG, 2001; Clark and Hockenmaier, 2002; Flickinger et al., 2000) . Here, we describe a method that exploits one such grammar implementation in the Lexicalized Tree-Adjoining Grammar (LTAG) formalism for deriving domain-specific corpora. An LTAG consists of a set of elementary trees (Supertags) (Bangalore and Joshi, 1999) each associated with a lexical item. The set of sentences generated by an LTAG can be obtained by combining supertags using substitution and adjunction operations. In related work (Rambow et al., 2002) , it has been shown that for a restricted version of LTAG, the combinations of a set of supertags can be represented as an FSM. This FSM compactly encodes the set of sentences generated by an LTAG grammar. We derive a domain-specific corpus by constructing a lexicon consisting of pairings of words with their supertags that are relevant to that domain. We then compile the grammar to build an FSM of all sentences up to a given length. We sample this FSM and build a language model as discussed in Section 3.3. Given untranscribed utterances from a specific domain, we can also adapt the language model as discussed in Section 3.6.",
"cite_spans": [
{
"start": 186,
"end": 198,
"text": "(XTAG, 2001;",
"ref_id": "BIBREF25"
},
{
"start": 199,
"end": 227,
"text": "Clark and Hockenmaier, 2002;",
"ref_id": "BIBREF6"
},
{
"start": 228,
"end": 252,
"text": "Flickinger et al., 2000)",
"ref_id": "BIBREF8"
},
{
"start": 484,
"end": 511,
"text": "(Bangalore and Joshi, 1999)",
"ref_id": "BIBREF3"
},
{
"start": 692,
"end": 713,
"text": "(Rambow et al., 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting the SwitchBoard Language Model",
"sec_num": "3.6"
},
{
"text": "The grammar-based interpreter uses the composition operation on FSTs to transduce multimodal strings (gesture, speech) to an interpretation. The set of speech strings that can be assigned an interpretation are exactly those that are represented in the grammar. The accuracy of the meaning representation can be expected to be reasonable if the user's input matches one of the multimodal strings encoded in the grammar. But for those user inputs that are not encoded in the grammar, the system will not return a meaning representation. In order to improve the usability of the system, we expect it to produce a (partial) meaning representation, irrespective of the grammaticality of the user's input and the coverage limitations of the grammar. It is this aspect that we refer to as robustness in understanding. We present below two approaches to robust multimodal understanding that we have developed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robust Multimodal Understanding",
"sec_num": "4"
},
{
"text": "In order to overcome the possible mismatch between the user's input and the language encoded in the multimodal grammar (λ_G), we use an edit-distance based pattern matching algorithm to coerce the set of strings encoded in the lattice resulting from ASR (λ_ASR) to match one of the strings that can be assigned an interpretation. The edit operations (insertion, deletion, substitution) can either be word-based or phone-based and are associated with a cost. These costs can be tuned based on the word/phone confusions present in the domain. The edit operations are encoded as a transducer (λ_edit) as shown in Figure 5 and can apply to both one-best and lattice output of the recognizer. We are interested in the string with the least number of edits (s*) that can be assigned an interpretation by the grammar. This can be achieved by composition (∘) of transducers followed by a search for the least cost path through a weighted transducer as shown below. This approach is akin to example-based techniques used in other areas of NLP such as machine translation. In our case, the set of examples (encoded by the grammar) is represented as a finite-state machine.",
"cite_spans": [],
"ref_spans": [
{
"start": 624,
"end": 632,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Pattern Matching Approach",
"sec_num": "4.1"
},
{
"text": "s* = bestpath(λ_ASR ∘ λ_edit ∘ λ_G) (6) Figure 5: the edit transducer has arcs w_i:w_j/scost (substitution), w_i:w_i/0 (match), w_i:ε/dcost (deletion), and ε:w_i/icost (insertion).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Matching Approach",
"sec_num": "4.1"
},
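The pattern-matching idea of Section 4.1 (coerce the recognizer output onto the closest in-grammar string so it can be assigned an interpretation) can be sketched with a plain dynamic-programming edit distance over a handful of example strings. The grammar strings and uniform costs below are illustrative; the paper operates on lattices with tuned per-confusion costs, which this toy does not.

```python
# Sketch of edit-distance coercion of an ASR string onto the nearest
# in-grammar string (word-level Levenshtein, uniform costs).

def edit_distance(a, b):
    """Word-level Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[m][n]

GRAMMAR_STRINGS = [
    "phone for these two restaurants",
    "address for this restaurant",
    "subway directions to this place",
]

def coerce(asr_string):
    """Return the in-grammar string with the fewest edits from the input."""
    tokens = asr_string.split()
    return min(GRAMMAR_STRINGS,
               key=lambda g: edit_distance(tokens, g.split()))

print(coerce("uh phone numbers for these two restaurants"))
```

An out-of-grammar utterance ("uh ... numbers ...") is mapped to the nearest string the grammar can interpret, at a cost of two deletions.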
{
"text": "A second approach is to view robust multimodal understanding as a sequence of classification problems in order to determine the predicate and arguments of an utterance. The meaning representation shown in (1) consists of a predicate (the command attribute) and a sequence of one or more argument attributes which are the parameters for the successful interpretation of the user's intent. For example, in (1), cmd:info is the predicate and type:phone object:[r12,r15] is the set of arguments to the predicate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-based Approach",
"sec_num": "4.2"
},
{
"text": "We determine the predicate (ĉ) for an n-token multimodal utterance (S) by maximizing the posterior probability as shown in Equation 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-based Approach",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ĉ = argmax_c P(c | S)",
"eq_num": "(7)"
}
],
"section": "\u00a9 \"",
"sec_num": null
},
{
"text": "We view the problem of identifying and extracting arguments from a multimodal input as a problem of associating each token of the input with a specific tag that encodes the label of the argument and the span of the argument. These tags are drawn from a tagset which is constructed by extending each argument label by three additional symbols I, O, B, following (Ramshaw and Marcus, 1995) . These symbols correspond to cases when a token is inside (I) an argument span, outside (O) an argument span, or at the boundary (B) of two argument spans (See Table 1 ). Given this encoding, the problem of extracting the arguments is a search for the most likely sequence of tags (T̂) given the input multimodal utterance",
"cite_spans": [
{
"start": 362,
"end": 388,
"text": "(Ramshaw and Marcus, 1995)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 550,
"end": 557,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "\u00a9 \"",
"sec_num": null
},
{
"text": "as shown in Equation (8). We approximate the posterior probability as shown in Equation 9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-based Approach",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T^* = \\arg\\max_T P(T \\mid W_m) (8) \\qquad T^* \\approx \\arg\\max_T \\prod_{i=1}^{N} P(t_i \\mid \\Phi_i(W_m, t_{i-1}, t_{i-2}))",
"eq_num": "(9)"
}
],
"section": "Classification-based Approach",
"sec_num": "4.2"
},
{
"text": "Owing to the large set of features that are used for predicate identification and argument extraction, we estimate the probabilities using a classification model. In particular, we use the AdaBoost classifier (Freund and Schapire, 1996), wherein a highly accurate classifier is built by combining many \"weak\" or \"simple\" base classifiers $h_t$, each of which may only be moderately accurate. The selection of the weak classifiers proceeds iteratively, picking the weak classifier that correctly classifies the examples that are misclassified by the previously selected weak classifiers. Each weak classifier is associated with a weight ($\\alpha_t$) that reflects its contribution towards minimizing the classification error. The posterior probability of",
"cite_spans": [
{
"start": 209,
"end": 236,
"text": "(Freund and Schapire, 1996)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-based Approach",
"sec_num": "4.2"
},
{
"text": "$P(c \\mid x)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-based Approach",
"sec_num": "4.2"
},
{
"text": "is computed as in Equation 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-based Approach",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(c \\mid x) = \\frac{1}{1 + e^{-2 \\sum_t \\alpha_t h_t(x, c)}}",
"eq_num": "(10)"
}
],
"section": "Classification-based Approach",
"sec_num": "4.2"
},
{
"text": "It should be noted that the data for training the classifiers can be collected from the domain or derived from an in-domain grammar using techniques similar to those presented in Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-based Approach",
"sec_num": "4.2"
},
{
"text": "We describe a set of experiments to evaluate the performance of the speech recognizer and the concept accuracy of speech-only and speech-and-gesture exchanges in our MATCH multimodal system. We use word accuracy and string accuracy for evaluating ASR output. All results presented in this section are based on 10-fold cross-validation experiments run on the 709 spoken and multimodal exchanges collected from the pilot study described in Section 3.1. Table 2 presents the performance results for ASR word and sentence accuracy using language models trained on the collected in-domain corpus as well as on corpora derived using the different methods discussed in Section 3. For the class-based models, we defined classes based on areas of interest, points of interest, cuisine types (e.g., Indonesian), price categories (e.g., moderately priced, expensive), and neighborhoods (e.g., Upper East Side, Chinatown). It is immediately apparent that the hand-crafted grammar used as a language model performs poorly and that a language model trained on the collected domain-specific corpus performs significantly better than models trained on derived data. However, it is encouraging to note that a model trained on a derived corpus (obtained by combining a migrated out-of-domain corpus with a corpus created by sampling the in-domain grammar) is within 10% word accuracy of the model trained on the collected corpus. There are several other noteworthy observations from these experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 451,
"end": 458,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "The performance of the language model trained on data sampled from the grammar is dramatically better than that of the hand-crafted grammar itself. This technique provides a promising direction for authoring portable grammars that can subsequently be sampled to build robust language models when no in-domain corpora are available. Furthermore, combining the grammar with in-domain data, as described in Section 3.4, significantly outperforms all other models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model",
"sec_num": "5.1"
},
{
"text": "For the experiment on migration of out-of-domain corpus, we used a corpus from a software helpdesk application. Table 2 shows that a model trained on data migrated using linguistic units, as described in Section 3.5, significantly outperforms a model trained only on the out-of-domain corpus. Also, combining the grammar-sampled corpus with the migrated corpus provides a further improvement.",
"cite_spans": [],
"ref_spans": [
{
"start": 112,
"end": 119,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Model",
"sec_num": "5.1"
},
{
"text": "The performance of the SwitchBoard model on the MATCH domain is presented in Table 2. We built a trigram model using a 5.4-million-word SwitchBoard corpus and investigated the effect of adapting the resulting language model on in-domain untranscribed speech utterances. The adaptation is done by first recognizing the training partition of the in-domain speech utterances and then building a language model from the recognized text.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Model",
"sec_num": "5.1"
},
{
"text": "We observe that although the performance of the SwitchBoard language model on the MATCH domain is poorer than that of a model obtained by migrating data from a related domain, the performance can be significantly improved using the adaptation technique.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model",
"sec_num": "5.1"
},
{
"text": "The last row of Table 2 shows the results of using the MATCH-specific lexicon to generate a corpus using a wide-coverage grammar, training a language model, and adapting the resulting model using in-domain untranscribed speech utterances, as was done for the SwitchBoard model. The class-based trigram model was built using 500,000 randomly sampled paths from the network constructed by the procedure described in Section 3.7.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Model",
"sec_num": "5.1"
},
{
"text": "In this section, we present results on multimodal understanding using the two techniques presented in Section 4. We use concept token accuracy and concept string accuracy as evaluation metrics for the entire meaning representation in these experiments. These metrics correspond to the word accuracy and string accuracy metrics used for ASR evaluation. In order to provide a finer-grained evaluation, we break down the concept accuracy in terms of the accuracy of identifying the predicates and arguments. Again, we use string accuracy metrics to evaluate predicate and argument accuracy. We use the output of the ASR with the language model trained on the collected data (word accuracy of 73.8%) as the input to the understanding component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Understanding",
"sec_num": "5.2"
},
{
"text": "The grammar-based multimodal understanding system composes the input multimodal string with the multimodal grammar, represented as an FST, to produce an interpretation. Thus an interpretation can be assigned only to those multimodal strings that are encoded in the grammar. However, the result of ASR and gesture recognition may not be one of the strings encoded in the grammar, and such strings are not assigned an interpretation. This fact is reflected in the low concept string accuracy for the baseline shown in Table 3. The pattern-matching based robust understanding approach mediates the mismatch between the strings that are output by ASR and the strings that can be assigned an interpretation. We experimented with word-based pattern matching as well as phone-based pattern matching on the one-best output of the recognizer. As shown in Table 3, the pattern-matching robust understanding approach improves the concept accuracy over the baseline significantly. Furthermore, the phone-based matching method has performance similar to that of the word-based matching method.",
"cite_spans": [],
"ref_spans": [
{
"start": 516,
"end": 523,
"text": "Table 3",
"ref_id": null
},
{
"start": 846,
"end": 853,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multimodal Understanding",
"sec_num": "5.2"
},
{
"text": "For the classification-based approach to robust understanding, we used a total of 10 predicates such as help, assert, inforequest, and 20 argument types such as cuisine, price, location. We use unigrams, bigrams, and trigrams appearing in the multimodal utterance as weak classifiers for the purpose of predicate classification. In order to predict the tag of a word for argument extraction, we use the left and right trigram context and the tags for the preceding two tokens as weak classifiers. The results are presented in Table 3.",
"cite_spans": [],
"ref_spans": [
{
"start": 526,
"end": 533,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multimodal Understanding",
"sec_num": "5.2"
},
{
"text": "Both approaches to robust understanding significantly outperform the baseline model. However, it is interesting to note that while the pattern-matching based approach has better argument extraction accuracy, the classification-based approach has better predicate identification accuracy. There are two possible reasons for this. First, argument extraction requires more non-local information, which is available in the pattern-matching based approach, while the classification-based approach relies on local information and is more conducive to identifying the simple predicates in MATCH. Second, the pattern-matching approach uses the entire grammar as a model for matching, while the classification approach is trained on the training data, which is significantly smaller than the number of examples encoded in the grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Understanding",
"sec_num": "5.2"
},
{
"text": "Although we are not aware of any attempts to address the issue of robust understanding in the context of multimodal systems, this issue has been of great interest in the context of speech-only conversational systems (Dowding et al., 1993; Seneff, 1992; Allen et al., 2000; Lavie, 1996). The output of the recognizer in these systems is usually parsed using a hand-crafted grammar that assigns a meaning representation suited for the downstream dialog component. The coverage problems of the grammar and the parsing of extra-grammatical utterances are typically addressed by retrieving fragments from the parse chart and incorporating operations that combine fragments to derive a meaning for the recognized utterance. We have presented an approach that achieves robust multimodal utterance understanding using an edit-distance automaton in a finite-state-based interpreter without the need for combining fragments from a parser.",
"cite_spans": [
{
"start": 216,
"end": 238,
"text": "(Dowding et al., 1993;",
"ref_id": "BIBREF7"
},
{
"start": 239,
"end": 252,
"text": "Seneff, 1992;",
"ref_id": "BIBREF20"
},
{
"start": 253,
"end": 272,
"text": "Allen et al., 2000;",
"ref_id": "BIBREF0"
},
{
"start": 273,
"end": 285,
"text": "Lavie, 1996)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The issue of combining rule-based and data-driven approaches has received less attention, with the exception of a few (Wang et al., 2000; Rayner and Hockey, 2003; Wang and Acero, 2003) . In a recent paper (Rayner and Hockey, 2003) , the authors address this issue by employing a decision-list-based speech understanding system as a means of progressing from rule-based models to data-driven models when data becomes available. The decision-list-based understanding system also provides a method for robust understanding. In contrast, the approach presented in this paper can be used on lattices of speech and gestures to produce a lattice of meaning representations.",
"cite_spans": [
{
"start": 118,
"end": 137,
"text": "(Wang et al., 2000;",
"ref_id": "BIBREF24"
},
{
"start": 138,
"end": 162,
"text": "Rayner and Hockey, 2003;",
"ref_id": "BIBREF19"
},
{
"start": 163,
"end": 184,
"text": "Wang and Acero, 2003)",
"ref_id": "BIBREF23"
},
{
"start": 205,
"end": 230,
"text": "(Rayner and Hockey, 2003)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "In this paper, we have addressed how to rapidly prototype multimodal conversational systems without relying on the collection of domain-specific corpora. We have presented several techniques that exploit domain-specific grammars, reuse out-of-domain corpora, and adapt large conversational corpora and wide-coverage grammars to derive a domain-specific corpus. We have demonstrated that a language model trained on a derived corpus performs within 10% word accuracy of a language model trained on a collected domain-specific corpus, suggesting a method of building an initial language model without having to collect domain-specific corpora. We have also presented and evaluated pattern-matching and classification-based approaches to improve the robustness of multimodal understanding. We have presented results for these approaches in the context of a multimodal city guide application (MATCH).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "We thank Patrick Ehlen, Amanda Stent, Helen Hastie, Candy Kamm, Marilyn Walker, and Steve Whittaker for their contributions to the MATCH system. We also thank Allen Gorin, Mazin Rahim, Giuseppe Riccardi, and Juergen Schroeter for their comments on earlier versions of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "8"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An architecture for a generic dialogue shell",
"authors": [
{
"first": "J",
"middle": [],
"last": "Allen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Byron",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dzikovska",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Ferguson",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Galescu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stent",
"suffix": ""
}
],
"year": 2000,
"venue": "JNLE",
"volume": "6",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Allen, D. Byron, M. Dzikovska, G. Ferguson, L. Galescu, and A. Stent. 2000. An architecture for a generic dialogue shell. JNLE, 6(3).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised language model adaptation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bacchiani",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. Int. Conf. Acoustic,Speech,Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Bacchiani and B. Roark. 2003. Unsupervised language model adaptation. In Proc. Int. Conf. Acoustic, Speech, Signal Processing.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Tight-coupling of multimodal language processing with speech recognition",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnston",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Bangalore and M. Johnston. 2000. Tight-coupling of multimodal language processing with speech recognition. In Proceedings of ICSLP, Beijing, China.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Supertagging: An approach to almost parsing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Bangalore and A. K. Joshi. 1999. Supertagging: An approach to almost parsing. Computational Linguistics, 25(2).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The AT&T next-generation TTS",
"authors": [
{
"first": "M",
"middle": [],
"last": "Beutnagel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Conkie",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schroeter",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Stylianou",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Syrdal",
"suffix": ""
}
],
"year": 1999,
"venue": "Joint Meeting of ASA; EAA and DAGA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Beutnagel, A. Conkie, J. Schroeter, Y. Stylianou, and A. Syrdal. 1999. The AT&T next-generation TTS. In Joint Meeting of ASA; EAA and DAGA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Towards Understanding Spontaneous Speech: Word Accuracy vs. Concept Accuracy",
"authors": [
{
"first": "M",
"middle": [],
"last": "Boros",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Eckert",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Gallwitz",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "G\u00f6rz",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hanrieder",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Niemann",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Boros, W. Eckert, F. Gallwitz, G. G\u00f6rz, G. Hanrieder, and H. Niemann. 1996. Towards Understanding Spontaneous Speech: Word Accuracy vs. Concept Accuracy. In Proceedings of ICSLP, Philadelphia.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Evaluating a wide-coverage CCG parser",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the LREC 2002 Beyond Parseval Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and Julia Hockenmaier. 2002. Evaluating a wide-coverage CCG parser. In Proceedings of the LREC 2002 Beyond Parseval Workshop, Las Palmas, Spain.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "GEMINI: A natural language system for spoken-language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Dowding",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Gawron",
"suffix": ""
},
{
"first": "D",
"middle": [
"E"
],
"last": "Appelt",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bear",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Cherny",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Moran",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "54--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Dowding, J. M. Gawron, D. E. Appelt, J. Bear, L. Cherny, R. Moore, and D. B. Moran. 1993. GEMINI: A natural language system for spoken-language understanding. In Proceedings of ACL, pages 54-61.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "HPSG analysis of English",
"authors": [
{
"first": "D",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sag",
"suffix": ""
}
],
"year": 2000,
"venue": "Verbmobil: Foundations of Speech-to-Speech Translation",
"volume": "",
"issue": "",
"pages": "254--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Flickinger, A. Copestake, and I. Sag. 2000. HPSG analysis of English. In W. Wahlster, editor, Verbmobil: Foundations of Speech-to-Speech Translation, pages 254-263. Springer-Verlag, Berlin, Heidelberg, New York.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Experiments with a new boosting algorithm",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Freund",
"suffix": ""
},
{
"first": "R",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
}
],
"year": 1996,
"venue": "Machine Learning: Proceedings of the Thirteenth International Conference",
"volume": "",
"issue": "",
"pages": "148--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Freund and R. E. Schapire. 1996. Experiments with a new boosting algorithm. In Machine Learning: Proceedings of the Thirteenth International Conference, pages 148-156.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Rapid language model development for new task domains",
"authors": [
{
"first": "L",
"middle": [],
"last": "Galescu",
"suffix": ""
},
{
"first": "E",
"middle": [
"K"
],
"last": "Ringger",
"suffix": ""
},
{
"first": "J",
"middle": [
"F"
],
"last": "Allen",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the ELRA First International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Galescu, E. K. Ringger, and J. F. Allen. 1998. Rapid language model development for new task domains. In Proceedings of the ELRA First International Conference on Language Resources and Evaluation (LREC), Granada, Spain.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Finite-state multimodal parsing and understanding",
"authors": [
{
"first": "M",
"middle": [],
"last": "Johnston",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bangalore",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Johnston and S. Bangalore. 2000. Finite-state multimodal parsing and understanding. In Proceedings of COLING, Saarbr\u00fccken, Germany.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multimodal language processing for mobile information access",
"authors": [
{
"first": "M",
"middle": [],
"last": "Johnston",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stent",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Vasireddy",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ehlen",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Johnston, S. Bangalore, A. Stent, G. Vasireddy, and P. Ehlen. 2002a. Multimodal language processing for mobile information access. In Proceedings of ICSLP, Denver, CO.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "MATCH: An architecture for multimodal dialog systems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Johnston",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Vasireddy",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stent",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ehlen",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Whittaker",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Maloor",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Johnston, S. Bangalore, G. Vasireddy, A. Stent, P. Ehlen, M. Walker, S. Whittaker, and P. Maloor. 2002b. MATCH: An architecture for multimodal dialog systems. In Proceedings of ACL, Philadelphia.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "GLR*: A Robust Grammar-Focused Parser for Spontaneously Spoken Language",
"authors": [
{
"first": "A",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Lavie. 1996. GLR*: A Robust Grammar-Focused Parser for Spontaneously Spoken Language. Ph.D. thesis, Carnegie Mellon University.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Regular approximations of CFLs: A grammatical view",
"authors": [
{
"first": "M-J",
"middle": [],
"last": "Nederhof",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the International Workshop on Parsing Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M-J. Nederhof. 1997. Regular approximations of CFLs: A grammatical view. In Proceedings of the International Workshop on Parsing Technology, Boston.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Speech recognition by composition of weighted finite automata",
"authors": [
{
"first": "Fernando",
"middle": [
"C",
"N"
],
"last": "Pereira",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"D"
],
"last": "Riley",
"suffix": ""
}
],
"year": 1997,
"venue": "Finite State Devices for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "431--456",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando C.N. Pereira and Michael D. Riley. 1997. Speech recognition by composition of weighted finite automata. In E. Roche and Schabes Y., editors, Finite State Devices for Natural Language Processing, pages 431-456. MIT Press, Cambridge, Massachusetts.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Creating a finitestate parser with application semantics",
"authors": [
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Tahir",
"middle": [],
"last": "Butt",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Nasr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Owen Rambow, Srinivas Bangalore, Tahir Butt, Alexis Nasr, and Richard Sproat. 2002. Creating a finite-state parser with application semantics. In Proceedings of the 19th International Conference on Computational Linguistics (COLING 2002), Taipei, Taiwan.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Text chunking using transformation-based learning",
"authors": [
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Third Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lance Ramshaw and Mitchell P. Marcus. 1995. Text chunking using transformation-based learning. In Proceedings of the Third Workshop on Very Large Corpora, MIT, Cambridge, Boston.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Transparent combination of rule-based and data-driven approaches in speech understanding",
"authors": [
{
"first": "M",
"middle": [],
"last": "Rayner",
"suffix": ""
},
{
"first": "B",
"middle": [
"A"
],
"last": "Hockey",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Rayner and B. A. Hockey. 2003. Transparent combination of rule-based and data-driven approaches in speech understanding. In Proceedings of the EACL 2003.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A relaxation method for understanding spontaneous speech utterances",
"authors": [
{
"first": "S",
"middle": [],
"last": "Seneff",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings, Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Seneff. 1992. A relaxation method for understanding spontaneous speech utterances. In Proceedings, Speech and Natural Language Workshop, San Mateo, CA.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The Watson speech recognition engine",
"authors": [
{
"first": "R",
"middle": [
"D"
],
"last": "Sharp",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Bocchieri",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Castillo",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Parthasarathy",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Rath",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Riley",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Rowland",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of ICASSP",
"volume": "",
"issue": "",
"pages": "4065--4068",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.D. Sharp, E. Bocchieri, C. Castillo, S. Parthasarathy, C. Rath, M. Riley, and J. Rowland. 1997. The Watson speech recognition engine. In Proceedings of ICASSP, pages 4065-4068.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Online adaptation for language models in spoken dialogue systems",
"authors": [
{
"first": "B",
"middle": [],
"last": "Souvignier",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kellner",
"suffix": ""
}
],
"year": 1998,
"venue": "Int. Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Souvignier and A. Kellner. 1998. Online adaptation for language models in spoken dialogue systems. In Int. Conference on Spoken Language Processing (ICSLP).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Combination of CFG and n-gram modeling in semantic grammar learning",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Acero",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Eurospeech Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Wang and A. Acero. 2003. Combination of CFG and n-gram modeling in semantic grammar learning. In Proceedings of the Eurospeech Conference, Geneva, Switzerland.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Unified Context-Free Grammar and N-Gram Model for Spoken Language Processing",
"authors": [
{
"first": "Y",
"middle": [
"Y"
],
"last": "Wang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mahajan",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y.Y. Wang, M. Mahajan, and X. Huang. 2000. Unified Context-Free Grammar and N-Gram Model for Spoken Language Processing. In Proceedings of ICASSP.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A lexicalized tree-adjoining grammar for English",
"authors": [
{
"first": "",
"middle": [],
"last": "Xtag",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "XTAG. 2001. A lexicalized tree-adjoining grammar for English. Technical report, University of Pennsylvania, http://www.cis.upenn.edu/~xtag/gramrelease.html.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Two area gestures",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Figure 2: Multimodal Architecture",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "used in Equation 2 above.",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "example of a large vocabulary conversational speech corpus. We built a trigram model (b adaptation is done by first recognizing the in-domain speech utterances and then building a language model (b",
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"text": "Edit transducer with insertion, deletion, substitution and identity arcs. or phones. The costs on the arcs are set up such that scost < icost + dcost.",
"uris": null,
"type_str": "figure"
},
"TABREF2": {
"num": null,
"html": null,
"content": "<table><tr><td/><td>Scenario</td><td>ASR Word Accuracy</td><td>Sentence Accuracy</td></tr><tr><td>Grammar Based</td><td>Grammar as Language Model</td><td>41.6</td><td>38.0</td></tr><tr><td/><td>Class-based N-gram Language Model</td><td>60.6</td><td>42.9</td></tr><tr><td>In-domain Data</td><td>Class-based N-gram Model</td><td>73.8</td><td>57.1</td></tr><tr><td>Grammar+In-domain Data</td><td>Class-based N-gram Model</td><td>75.0</td><td>59.5</td></tr><tr><td>Out-of-domain</td><td>N-gram Model</td><td>17.6</td><td>17.5</td></tr><tr><td/><td>Class-based N-gram Model</td><td>58.4</td><td>38.8</td></tr><tr><td/><td>Class-based N-gram Model with Grammar-based N-gram Language Model</td><td>64.0</td><td>45.4</td></tr><tr><td>SwitchBoard</td><td>N-gram Model</td><td>43.5</td><td>25.0</td></tr><tr><td/><td>Language model trained on recognized in-domain data</td><td>55.7</td><td>36.3</td></tr><tr><td>Wide-coverage Grammar</td><td>N-gram Model</td><td>43.7</td><td>24.8</td></tr><tr><td/><td>Language model trained on recognized in-domain data</td><td>55.8</td><td>36.2</td></tr><tr><td colspan=\"4\">Table 2: Performance results for ASR Word and Sentence accuracy using models trained on data derived from different methods of bootstrapping domain-specific data.</td></tr><tr><td colspan=\"4\">we defined different classes based on areas of interest (eg. riverside park, turtle pond), points of interest (eg. Ellis Island, United Nations Building), type of cuisine (eg. Afghani, Indonesian), price categories (eg. moderately priced, expensive), and neighborhoods (eg. Upper East Side, Chinatown).</td></tr></table>",
"text": "",
"type_str": "table"
}
}
}
}