{
"paper_id": "W00-0308",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:33:34.921180Z"
},
"title": "Epiphenomenal Grammar Acquisition with GSG",
"authors": [
{
"first": "Marsal",
"middle": [],
"last": "Gavaldà",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "As a step toward conversational systems that allow for a more natural human-computer interaction, we report on GSG, a system that, while providing a natural-language interface to a variety of applications, engages in clarification dialogues with the end user through which new semantic mappings are dynamically acquired. GSG exploits task- and language-dependent information but is fully task- and language-independent in its architecture and strategies.",
"pdf_parse": {
"paper_id": "W00-0308",
"_pdf_hash": "",
"abstract": [
{
"text": "As a step toward conversational systems that allow for a more natural human-computer interaction, we report on GSG, a system that, while providing a natural-language interface to a variety of applications, engages in clarification dialogues with the end user through which new semantic mappings are dynamically acquired. GSG exploits task- and language-dependent information but is fully task- and language-independent in its architecture and strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "As conversational systems move from the realm of science fiction and research labs into people's everyday life, and as they evolve from the plain, system-directed interactions à la press or say one of so-called interactive voice response systems based on isolated-word recognizers and fixed-menu navigation, to the more open, mixed-initiative dialogues carried out in spoken dialogue systems based on large-vocabulary continuous speech recognizers and flexible dialogue managers (see, e.g., (Allen et al., 1996; Denecke, 1997; Walker et al., 1998; Rudnicky et al., 1999; Zue et al., 2000)), the overall experiential quality of the human-computer interaction becomes increasingly important. That is, beyond the obvious factors of speech recognition accuracy and speech synthesis naturalness, the most critical challenge is that of providing conversational interactions that feel natural to human users (cf. (Glass, 1999)). This, we believe, mainly translates into building systems that possess some degree of linguistic, reasoning, and learning abilities.",
"cite_spans": [
{
"start": 489,
"end": 509,
"text": "(Allen et al., 1996;",
"ref_id": "BIBREF0"
},
{
"start": 510,
"end": 524,
"text": "Denecke, 1997;",
"ref_id": "BIBREF2"
},
{
"start": 525,
"end": 545,
"text": "Walker et al., 1998;",
"ref_id": "BIBREF7"
},
{
"start": 546,
"end": 568,
"text": "Rudnicky et al., 1999;",
"ref_id": "BIBREF6"
},
{
"start": 569,
"end": 586,
"text": "Zue et al., 2000)",
"ref_id": "BIBREF8"
},
{
"start": 905,
"end": 918,
"text": "(Glass, 1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we report on GSG, a conversational system that partially addresses these issues by being able to dynamically extend its linguistic knowledge through simple, natural-language-only interactions with non-expert users: on a purely on-need basis, i.e., when the system does not understand what the user means, GSG makes educated guesses, poses confirmation and clarification questions, and learns new semantic mappings from the answers given by the users, as well as from other linguistic information that they may volunteer. GSG provides, therefore, an extremely robust interface, and, at the same time, significantly reduces grammar development time because the original grammar, while complete with respect to the semantic representation of the domain at hand, need only cover a small portion of the surface variability, since it will be automatically extended as an epiphenomenon of engaging in clarification dialogues with end users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As sketched in Figure 1, GSG is a conversational system1 built around the SouP parser (Gavaldà, 2000).",
"cite_spans": [
{
"start": 88,
"end": 102,
"text": "(Gavald~, 2000",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Brief System Description",
"sec_num": "2"
},
{
"text": "GSG's principal (and possibly sole) knowledge source is a task-dependent, semantic context-free grammar (the Kernel Grammar). At run-time, the Grammar is initialized as the union of the Kernel Grammar and, possibly, the User Grammar (user-dependent rules learned in previous sessions). The Grammar gives rise to the Ontology and to a parsebank (collection of parse trees), which, together with a possible Kernel Parsebank, becomes the Parsebank, from which the statistical Prediction Models are trained. The Ontology is a directed acyclic graph automatically derived from the Grammar in which the nodes correspond to grammar nonterminals (NTs) and the arcs record immediate dominance relations, i.e., the presence of, say, NTi in a right-hand side (RHS) alternative of NTj will result in an arc from NTi to NTj. Nodes are annotated as being \"Principal\" vs. \"Auxiliary\" (via naming convention), \"Top-level\" vs. \"Non-top-level\" (i.e., whether they are starting symbols of the grammar), and as having \"Only NT daughters\" vs. \"Only T daughters\" vs. \"Mixed\"; arcs are annotated as being \"Is-a\" (estimated from being the only non-optional NT in a RHS alternative) vs. \"Expresses\" links, \"Always-required\" vs. \"Always-optional\" vs. \"Mixed,\" and \"Never-repeatable\" vs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brief System Description",
"sec_num": "2"
},
{
"text": "1 In the work reported here, GSG's interactions are text-based (keyboard as input, text window as output), but GSG is being integrated with both a speech recognizer and a speech synthesizer. \"Always-repeatable\" vs. \"Mixed\". Also, a topological sort2 on the nodes is computed to derive a general-to-specific partial order of the NTs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brief System Description",
"sec_num": "2"
},
{
"text": "A full system description is beyond the scope of this paper, but, very briefly, the User Interface mediates all interactions with the end user, the stack-based Dialogue Manager keeps track of current and past utterances and ensuing clarification dialogues, and, together with the History Interaction, ensures that no answered question is asked again. The GSG Engine manages the core of the system's \"intelligence,\" namely hypothesizing interpretations (together with the Parse Tree Builder) and on-line learning of semantic mappings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brief System Description",
"sec_num": "2"
},
{
"text": "To illustrate the workings of GSG, let's analyze an example interaction in an e-mail client task. Figure 2 shows the example dialogue, Figure 3 presents a relevant fragment of the semantic context-free grammar3 used to analyze the input, and Table 1 above lists the main prediction and learning strategies employed. 2 Requires that the grammar be acyclic. 3 Containing, approximately, 300 NTs, 500 Ts, and 800 RHS alternatives, out of which about 55% is dedicated to cover temporal and numeric expressions, and digits and letters.",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 106,
"text": "Figure 2",
"ref_id": null
},
{
"start": 135,
"end": 143,
"text": "Figure 3",
"ref_id": null
},
{
"start": 243,
"end": 256,
"text": "Table 1 above",
"ref_id": null
}
],
"eq_spans": [],
"section": "Example Dialogue",
"sec_num": "3"
},
{
"text": "The user inputs (U1) and, as it turns out, neither retrieve, nor bob, nor christmas are in the grammar, and no full parse of (U1) can be found. Rather than responding with an \"I don't understand, please rephrase,\" GSG sets its predictive machinery in motion with the goal of constructing a full interpretation for (U1).4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Dialogue",
"sec_num": "3"
},
{
"text": "The first step is to reparse (U1) in a mode in which all NTs are considered top-level, i.e., able to stand at the root of a parse tree. This produces a set of still unparsed words, such as retrieve, and parse subtrees, such as from bob being parsed under SENDER (via grammar rules (R6) to (R10)) and, ambiguously, under [sourceFolder] as well (via grammar rules (R11) and (R12)). All of it (i.e., the full content of the chart) is taken as evidence by the Prediction Models to postulate the overall meaning of the original utterance.",
"cite_spans": [
{
"start": 320,
"end": 334,
"text": "[sourceFolder]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example Dialogue",
"sec_num": "3"
},
{
"text": "4 The hypothesized interpretation assumes that a single parse tree will cover the entire problematic sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Dialogue",
"sec_num": "3"
},
{
"text": "(U1) U: retrieve all messages from bob that were sent after christmas (U2) S: I don't understand right away what you mean but let me guess... \"retrieve all messages from bob that were sent after christmas\" is a way to express: 1. send mail 2. reply mail 3. read mail 0. None of the above (U3) U: none (U4) S: \"retrieve all messages from bob that were sent after christmas\" is a way to express: 1. move mail 2. list mail 0. None of the above (U5) U: list (U6) S: By \"from bob\" do you mean: 1. source folder 2. sender (U7) U: sender (U8) S: \"christmas\" is a way to express: 1. yesterday (date relative yesterday) 2. tomorrow (date relative tomorrow) 3. today (date relative today) 0. None of the above (U9) U: by christmas i'm referring to december twenty fifth (U10) S: Thanks for teaching me the meaning of \"christmas\"! Thanks for teaching me the meaning of \"retrieve all messages from bob that were sent after christmas\"! I understand \"retrieve all messages from bob that were sent after december twenty fifth\" (U11) U: retrieve last email to mary before christmas (U12) S: I understand \"retrieve last email to mary before december twenty fifth\" Figure 2: Example dialogue between a user (U) and the system (S) on an e-mail client task. In this case (see (U2) to (U5)),5 the suggestions of the Prediction Models are not particularly accurate (the correct choice is presented only in fifth place), but, considering that the head verb (retrieve) is not even in the grammar, such a response to (U1) is definitely better than giving up. The effect of (U5) is to select [listMail] as (U1)'s \"anchor mother\" (logical root of the overall interpretation). But to complete the parse tree a few details still need to be filled in. To that effect (U6) is generated to disambiguate from bob and (U8) to find the right mapping for christmas. The reasoning behind the rather puzzling choices offered by (U8) comes from applying the Parser Predictions strategy: given the context in which an unparsed sequence (in this case, a single word) christmas appears, i.e., the subtree DATE_AFTER_PRE covering after (via (R14)), the grammar is traversed to find likely continuations of the context (left context only in this case). Since DATE_AFTER_PRE can be immediately followed by [dateAfter] (see (R13)), that makes [dateAfter] a candidate to cover the unparsed sequence christmas. 5 The options presented in (U2) and (U4) are generated at the same time; the only reason why they are split is to prevent overwhelming the end user, who may be hearing the choices spoken over the telephone. Also, note that in (U3) the user could have also said zero, or none of the above, and achieved the same result; alternatively, they could have volunteered information as in (U9). 7 Obviously \"the meaning of Christmas\" (cf. cheerful (U10)) may be much more profound than a shorthand for December 25, but, alas, conveying that is well beyond the simple grammar presented here.",
"cite_spans": [
{
"start": 1473,
"end": 1483,
"text": "[listMail]",
"ref_id": null
},
{
"start": 2170,
"end": 2181,
"text": "[datehfter]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1121,
"end": 1129,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Example Dialogue",
"sec_num": "3"
},
{
"text": "8 A \"verbness\" ratio is automatically computed for each candidate NT by running the POS tagger on automatically generated sentences from the NTs in question. Figure 3: Grammar fragment for an e-mail client task. '*' indicates optionality of the adjacent token, '+' repeatability, and '|' separates RHS alternatives. Terminals are italicized. WILDCARD is a special NT that matches any out-of-vocabulary word or any in-vocabulary word present in a list for that purpose.",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 166,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Example Dialogue",
"sec_num": "3"
},
{
"text": "[listMail]. The result was highly positive and led to the acquisition of the RHS alternative (M1). It is worth mentioning here that there are two kinds of mappings that GSG learns: RHS alternatives and subtree mappings. Learning new RHS alternatives is the preferred way because the knowledge can be incorporated into the Parsebank (and, in turn, into the Prediction Models). That is the effect of adding (M1) to the Grammar: since the Parsebank and the Prediction Models are updated on-line, the presence of the word retrieve in subsequent utterances becomes a strong indicator of LIST and, associatively, of [listMail]. However, when the source expression cannot be mapped into the desired target structure via grammar rules, as in (M2), the only solution is to remember the equivalence. This kind of learning, although definitely useful since the meaning of the source expression will be henceforth remembered, cannot be incorporated into the Prediction Models.",
"cite_spans": [
{
"start": 582,
"end": 620,
"text": "LIST and, associatively, of [listMail]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example Dialogue",
"sec_num": "3"
},
{
"text": "Right after (U9), (U1) is considered fully understood and the interpretation is automatically mapped into the feature structure (FS1)9 in Figure 5, which is then shipped to the Back-end Application Manager.",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 147,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Example Dialogue",
"sec_num": "3"
},
{
"text": "Finally, when (U11) comes in, a correct analysis is produced thanks to the mappings just learned from (U1),10 and (FS2) in Figure 5 is generated. (The POS tagger is a modified version of Brill's tagger (Brill, 1994).) 9 The mapping is simply a removal of auxiliary NTs from the parse tree, plus value extraction of dates, numbers and strings from certain subtrees, e.g., the subtree in (M2) becomes the substructure under datePoint in (FS1).",
"cite_spans": [
{
"start": 173,
"end": 187,
"text": "(Brill 1994).)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example Dialogue",
"sec_num": "3"
},
{
"text": "10 Note that rule [listMail] → LIST +MAIL_ARGUMENTS",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Dialogue",
"sec_num": "3"
},
{
"text": "The example above illustrates the philosophy of GSG,11 namely, to exploit task and linguistic knowledge to pose clarification questions in the face of incomplete analyses,12 build correct interpretations, and acquire new semantic mappings. Thus, a contribution of GSG is the demonstration that from a simple context-free grammar, with a very lightweight formalism, one can extract enough information (Ontology, Parsebank, Parser Predictions strategy) to conduct meaningful clarification dialogues. Note, moreover, that such dialogues occur entirely within GSG, with the Back-end Application Manager receiving only finalized feature structures.13 Another advantage is the ease with which natural-language interfaces can be constructed for new domains: since all the task and linguistic knowledge is extracted from the grammar,14 one need only develop a Kernel Grammar that models the domain at (extracted from the final interpretation of (U1)) would have been learned too, but its subsumption by existing rule (R1) was automatically detected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "11 Based on the pioneering work of (Lehman, 1989). 12 Detected by a lack of interpretation, an excessively fragmented interpretation, or by being told by the end user that the automatically generated paraphrase of their input is not what they meant.",
"cite_spans": [
{
"start": 33,
"end": 47,
"text": "(Lehman, 1989)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "13 Of course, prediction accuracy can improve if the Back-end Application Manager can be incorporated as a knowledge source to, for example, contribute to the ranking of hypotheses, but the point is that it is not necessary and that, as long as the capabilities of the back-end application are adequately modeled by the Grammar, the construction of the correct interpretation can be performed within GSG alone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "14 Except for the POS Tagger and the Syntactic Grammar. hand via its NTs15 but need not provide high coverage of the utterances possible in the domain (data which may not be available anyway). Also, reuse of existing grammar modules for, e.g., dates and numbers, is straightforward. However, a fear of letting the end user (indirectly) modify a grammar is that the grammar may grow untamed and become filled with new rules that disrupt the Kernel Grammar. To prevent that, besides the careful construction of interpretations via the strategies described above, GSG employs two safety mechanisms: before a rule is added to the grammar, it is checked whether it introduces ambiguity to the grammar,16 and whether it disrupts existing 15 Knowledge of, e.g., how the Ontology is computed helps, but it coincides with the most natural way of writing well-structured, context-free semantic grammars.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "16 Accomplished by using the SouP parser in yet another mode: parsing of RHSs (expanded to RHS paths) instead of terminals. In this case, the existence of a parse tree covering an entire RHS path indicates ambiguity. Note that if all RHS paths of the new rule can be parsed under the current RHS of the new rule's left-hand side, then the new rule is subsumed by the existing RHS and can therefore be discarded (cf. note 10).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "(correct) interpretations.17 In this way, some of the new rules may have to be discarded, but at least the health of the grammar is preserved.18 Another concern may be that the new mappings end up generating feature structures that are not understood by the Back-end Application Manager. To avoid that, GSG only allows a principal NT to be dominated by another principal NT if such a dominance relation is licensed by the Kernel Grammar. This guarantees that all resulting feature structures are structurally correct (although they may contain unexpected atomic values).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "A current limitation of GSG lies in the difficulty of segmenting long sequences of unparsed words: GSG uses POS tagging followed by noun-phrase bracketing (via parsing with a shallow Syntactic Grammar), which represents an improvement over the Single Segment Assumption (cf. (Lehman, 1989)), but is still far from perfect and can disrupt the ensuing clarification dialogue. Also, the number of questions that the system can pose as it builds an interpretation 17 Achieved by reparsing (a subset of) the Parsebank. Note that SouP can typically parse in the order of 100 utterances per second (cf. (Gavaldà, 2000)).",
"cite_spans": [
{
"start": 275,
"end": 289,
"text": "(Lehman, 1989)",
"ref_id": "BIBREF5"
},
{
"start": 585,
"end": 605,
"text": "(cf. (Gavaldb. 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "18 Assuming minimally cooperative and consistent users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "may, on occasion, exceed the patience of the end user (but the command cancel is always understood).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "The hardest problem we have encountered so far is typical of natural-language interfaces but is exacerbated in GSG (as it treats every unparsable sentence as an opportunity to learn), and that is the difficulty of identifying in-domain end-user sentences that go beyond the capabilities of the end application, or, in other words, are not expressible in the grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Finally, as GSG becomes fully integrated with a speech recognizer, it remains to be seen how an optimal point can be found in the tradeoff between the wide coverage but relatively low word recognition accuracy obtained with a loose dictation grammar, and the narrow coverage but high word accuracy achieved with a tight, task-dependent grammar, and how the degradation of the input is going to affect GSG's behavior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Overall, however, we believe that GSG, by virtue of its built-in robustness, minimal initial knowledge requirements, and learning abilities, begins to embody the kind of qualities that are necessary for conversational systems, if they are to provide, without exorbitant development effort, an interaction that feels truly natural to humans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Robust Understanding in a Dialogue System",
"authors": [
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of ACL-1996",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allen, James, et al. (1996). Robust Understanding in a Dialogue System. In Proceedings of ACL-1996.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Some Advances in Part of Speech Tagging",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of AAAI-1994",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brill, Eric. (1994). Some Advances in Part of Speech Tagging. In Proceedings of AAAI-1994.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An Informationbased Approach for Guiding Multi-modal Human-Computer Interaction",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Denecke",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of IJCAI-1997",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denecke, Matthias. (1997). An Information-based Approach for Guiding Multi-modal Human-Computer Interaction. In Proceedings of IJCAI-1997.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "SouP: A Parser for Real-world Spontaneous Speech",
"authors": [
{
"first": "Marsal",
"middle": [],
"last": "Gavaldà",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Sixth International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gavaldà, Marsal. (2000). SouP: A Parser for Real-world Spontaneous Speech. In Proceedings of the Sixth International Workshop on Parsing Technologies (IWPT-2000).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Challenges for Spoken Dialogue Systems",
"authors": [
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 1999 IEEE ASRU Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glass, James. (1999). Challenges for Spoken Dialogue Systems. In Proceedings of the 1999 IEEE ASRU Workshop.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Adaptive Parsing: Self-extending Natural Language Interfaces",
"authors": [
{
"first": "Jill",
"middle": [],
"last": "Lehman",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lehman, Jill. (1989). Adaptive Parsing: Self-extending Natural Language Interfaces. Ph.D. dissertation, School of Computer Science, Carnegie Mellon University.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Creating Natural Dialogs in the Carnegie Mellon COMMUNICATOR System",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Rudnicky",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of Eurospeech-1999",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rudnicky, Alex, et al. (1999). Creating Natural Dialogs in the Carnegie Mellon COMMUNICATOR System. In Proceedings of Eurospeech-1999.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning Optimal Dialogue Strategies: A Case Study of a Spoken Dialogue Agent for Email",
"authors": [
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING/ACL-1998",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Walker, Marilyn, et al. (1998). Learning Optimal Dialogue Strategies: A Case Study of a Spoken Dialogue Agent for Email. In Proceedings of COLING/ACL-1998.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "JUPITER: A Telephone-Based Conversational Interface for Weather Information",
"authors": [
{
"first": "",
"middle": [],
"last": "Zue",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Victor",
"suffix": ""
}
],
"year": 2000,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zue, Victor, et al. (2000). JUPITER: A Telephone-Based Conversational Interface for Weather Information. In IEEE Transactions on Speech and Audio Processing, Vol. 8, No. 1.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "GSG's system diagram. Ovals enclose knowledge sources, rectangles modules, and arrows indicate information flow. Dashed components are optional.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Mappings learned from the dialogue in Figure 2. Feature structures sent to the Back-end Application Manager after (U10) and (U12) in Figure 2.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"text": "However, since, according to the Ontology, [dateAfter] does not allow terminals as immediate daughters, a search is performed to find NTs under [dateAfter] that permit it. In this case (via (R15) to (R19)) it suggests yesterday, tomorrow, etc.6 The user, though, realizing that the system does not directly understand christmas, volunteers (U9),7 from which the mapping (M2) in Figure 4 is learned. At this point one may wonder about the fate of the unparsed word retrieve, since no question was asked about it. The answer is that GSG need not ask about every single prediction, if the confidence value is high enough. In this case, as soon as [listMail] was established (in (U5)) as the anchor mother, a Verbal Head Search strategy was launched to see whether, among the unparsed words, a verb was found that could be placed in a mostly-verb NT8 directly under 6 In fact it suggests [DATE_RELATIVE:yesterday], [DATE_RELATIVE:tomorrow], etc., but it presents an example automatically generated from such NTs.",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
}
}
}
}