{
"paper_id": "J88-3012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:56:19.157510Z"
},
"title": "DISCOURSE MODELS, DIALOG MEMORIES, AND USER MODELS",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Morik",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Technical University Berlin",
"location": {
"country": "West Germany"
}
},
"email": ""
}
],
"year": "1988",
"venue": "Computational Linguistics",
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "J88-3012",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "In this paper, we discuss some terminological issues related to the notions of discourse models, dialog memories, and user models. It is not our goal to show how discourse modeling and user modeling should actually interact in a cooperative system, but to show how the notions of discourse model, dialog memory, and user model can be defined and related in order to prevent misunderstandings and confusion. We argue that dialog memory may be subsumed under user model, as well as under discourse model, but that the three concepts should not be identified. Several separating criteria are discussed. We conclude that discourse modeling and user modeling are two lines of research that are orthogonal to each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "A dialog memory can be viewed as part of a user model, namely the part that represents the dialog-dependent knowledge of the user (Morik 1984). Entries in the dialog memory may give rise to entries in the user model, and entries in the user model may support the interpretation of an utterance, the interpretation then being stored in the dialog memory. However, in order to keep technical terms precise, user modeling on the one hand, and building and exploiting a dialog memory on the other, should not be identified. Identifying them would reduce user modeling to the dialog-dependent knowledge of the user as known to the system, while in fact a user model must cover information that a dialog memory does not.",
"cite_spans": [
{
"start": 130,
"end": 142,
"text": "(Morik 1984)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DIALOG MEMORY AS PART OF USER MODEL",
"sec_num": "2"
},
{
"text": "Let us think, for example, of a visit to the dentist's. The dentist will have some expectations concerning the client before the client has said a word---even before he has opened his mouth. This is due to the conversational setting, the roles of dentist and client. The same two persons meeting in another environment (e.g., at a political event, a horse race, or the opera) would not rely on the dentist-client expectations but on the expectations that belong to their roles in that setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DIALOG MEMORY AS PART OF USER MODEL",
"sec_num": "2"
},
{
"text": "A user model contains explicit assumptions about the role of the user and the way a particular user plays it. The system systematically exploits the user model to play its own role more cooperatively by adapting to diverse users. To that end, it uses rules that are parametrized according to the facets of the user. A user model is built up based on a \"naive psychology\", which forms a consistent image of the user.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DIALOG MEMORY AS PART OF USER MODEL",
"sec_num": "2"
},
{
"text": "Schuster also states that the user model covers entities that do not belong in the dialog memory. In addition to the argument mentioned above (the dialog memory being the part of the user model that represents the dialog-dependent knowledge of the user), she points out that the dialog memory is used for building up parts of the user model and that both the user model and the dialog memory are used for generating an adequate answer. But if this were a valid argument for establishing a subsumption relation, we should also view the grammar as part of the user model, because the grammar is necessary for understanding and producing utterances. All the knowledge sources of a natural language system (hopefully) work together. We separate them conceptually not because of their independence, but because they contain different kinds of knowledge that contribute to the overall task in different ways.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DIALOG MEMORY AS PART OF USER MODEL",
"sec_num": "2"
},
{
"text": "A dialog memory contains all beliefs that can be inferred with certainty from utterances, so that they belong to the mutual belief space. For example, the objects and their properties introduced in a dialog are typical entries in a dialog memory. Also, presuppositions that can be inferred from articles or question particles belong in the dialog memory. The linguistic rules that determine the inferences are valid and binding for all conversational settings. General rules establish mutual beliefs on the basis of utterances. The dialog memory is then used for, e.g., determining the appropriate description (definite/indefinite), anaphoric expression, or characterization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DIALOG MEMORY AS PART OF USER MODEL",
"sec_num": "2"
},
{
"text": "Another notion that is very close to user model as well as to dialog memory is discourse model. Sometimes dialog memory and discourse model are treated as synonyms (e.g., Wahlster 1986). Given the above definition of dialog memories, however, there is a difference between the two notions. As opposed to Schuster, who defines a discourse model as \"containing representations of entities, along with their properties and relations they participate in\", which corresponds exactly to our dialog memory, I use discourse model according to the framework of Grosz and Sidner (1986), where a discourse model is the syntactic structure of a dialog. One part of it, though, could be identified with the dialog memory, namely the focus space stack. The overall discourse model additionally represents the structure of the dialog with the segments and their relations, which is not part of the user model. Decomposing a dialog into segments and establishing relations between them does not depend on a particular conversational setting. As is the case with dialog memories, the overall discourse model, too, is built up by general linguistic rules that need not be parametrized according to a certain user.",
"cite_spans": [
{
"start": 171,
"end": 184,
"text": "Wahlster 1986",
"ref_id": "BIBREF7"
},
{
"start": 553,
"end": 576,
"text": "Grosz and Sidner (1986)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DIALOG MEMORY AS PART OF DISCOURSE MODEL",
"sec_num": "3"
},
{
"text": "Previous attempts to separate user models from discourse models have used the short-time/long-time criterion, arguing that entries in the dialog memory can be forgotten after the end of the dialog, whereas entries in the user model are to be remembered. The same argument applies to dialog memories as part of the discourse model. The rationale of this argument is that anaphors are not applicable from one dialog to another and that the structure of a dialog is as unlikely to be recalled as the syntactic structure of uttered sentences--just to mention these two phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SEPARATING CRITERIA",
"sec_num": "4"
},
{
"text": "But does that mean that the entities with all their properties and relations as communicated in the dialog are forgotten? What would be the reason to talk to each other then? How could we learn from each other? Knowledge is to a great extent transferred via dialogs. Second, how could speech acts have social obligations as a consequence that may well hold for a long time? (Think of promises, for example!) Although the speaker may have forgotten the dialog, the hearer has--by very general conventions of language use--the right to insist on the speaker's commitments (Lewis 1975, Searle 1969, Wunderlich 1972).",
"cite_spans": [
{
"start": 570,
"end": 581,
"text": "(Lewis 1975",
"ref_id": "BIBREF2"
},
{
"start": 582,
"end": 595,
"text": ", Searle 1969",
"ref_id": "BIBREF6"
},
{
"start": 596,
"end": 613,
"text": ", Wunderlich 1972",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SEPARATING CRITERIA",
"sec_num": "4"
},
{
"text": "The synthesis of those seemingly conflicting observations is that the content of the dialog memory is integrated into the world knowledge. In other words, the content of the focus space stack is partially incorporated into the world knowledge when it gets popped off the stack. So, the structure is lost, but the content is at least partly saved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SEPARATING CRITERIA",
"sec_num": "4"
},
{
"text": "Turning things the other way around, why couldn't properties or character traits of a user be forgotten? What makes entries of a user model more stable? Think, for instance, of a post office clerk. Although he may adapt his behavior to the particular customer during the dialog, he normally forgets the information about her or him immediately after the dialog. As Rich (1979) pointed out, user models may be short term or long term. Thus short-time/long-time or forgettable/unforgettable is not a criterion for separating user models from dialog memories or discourse models.",
"cite_spans": [
{
"start": 366,
"end": 377,
"text": "Rich (1979)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SEPARATING CRITERIA",
"sec_num": "4"
},
{
"text": "Another criterion could be whether the knowledge is used for generating the linguistic form (how to say something) or for establishing the content of a system's utterance (what to say). Clearly, dialog memory and overall discourse model deal with the linguistic structure of dialogs, e.g., reference resolution and the appropriate verbalization of concepts. The user model, on the other hand, also covers information that directs the selection of what to utter. The user's level of expertise determines, for instance, the specificity of a system utterance, the realization of notes of caution, and the word choice. The user's wants establish the context of the user utterances and guide the system's problem solving, thus keeping the system behavior directed towards the user goals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SEPARATING CRITERIA",
"sec_num": "4"
},
{
"text": "This distinction, however, is not clear cut either, for two reasons. First, the line between what to say and how to say it is rather fuzzy. Referencing a concept, for example, also involves choosing the appropriate attributes for characterization--and this is naturally a matter of what to say. Second, this criterion would exclude work as presented by Lehman and Carbonell (1988) from the area of user modeling. There, linguistic rules are specialized for a particular user in a particular conversational setting. This is clearly not a matter of the dialog memory, but of the user model, although it is concerned with the linguistic form. Thus the form/content distinction does not separate user models from dialog memories and discourse models, either.",
"cite_spans": [
{
"start": 353,
"end": 380,
"text": "Lehman and Carbonell (1988)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SEPARATING CRITERIA",
"sec_num": "4"
},
{
"text": "The difficulty of finding criteria that separate discourse models from user models indicates a case of cross-classification. The criteria, namely what is specific to a user and what concerns dialog structure, naturally cross. Dialog memory falls into both categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SEPARATING CRITERIA",
"sec_num": "4"
},
{
"text": "On the one hand, from what the user utters, his beliefs, his level of expertise in a certain domain, his wants, and his language style can be inferred. This knowledge can be used by all the system components: tuning syntactic analysis, resolving reference, determining the input speech act, disambiguating the input, selecting relevant information, organizing the text to be output (Paris 1988), choosing the appropriate words, referencing, topicalizing, etc. In order to do so, all the components must include procedures that are parametrized according to the (particular) user model. Currently, interactive systems do not put all the user facets to good use in all their components. This is not due to principled limits, however, but rather to a shortcoming in the state of the art.",
"cite_spans": [
{
"start": 384,
"end": 396,
"text": "(Paris 1988)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SEPARATING CRITERIA",
"sec_num": "4"
},
{
"text": "On the other hand, the user's utterances can also be analyzed from another viewpoint, namely incorporating them into a coherent discourse model as described by, e.g., Grosz and Sidner (1986). Also, this model can be used during all processing steps from understanding to generating.",
"cite_spans": [
{
"start": 167,
"end": 190,
"text": "Grosz and Sidner (1986)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SEPARATING CRITERIA",
"sec_num": "4"
},
{
"text": "Both user models and discourse models are built up (at least partially) from the user utterances. Both contribute to cooperative system behavior. But they do so from different viewpoints, with different aims. Adapting to a particular user on the one hand and achieving a coherent, well-formed dialog on the other are two aims for a cooperative system that are orthogonal to each other. The terms user model and discourse model denote different aspects of a system. Thus although the notions are intensionally different, the extensions of their respective definitions may overlap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SEPARATING CRITERIA",
"sec_num": "4"
},
{
"text": "Computational Linguistics, Volume 14, Number 3, September 1988",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Attention, Intentions, and the Structure of Discourse",
"authors": [
{
"first": "B",
"middle": [],
"last": "Grosz",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Sidner",
"suffix": ""
}
],
"year": 1986,
"venue": "In Computational Linguistics",
"volume": "12",
"issue": "",
"pages": "175--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grosz, B. and Sidner, C. 1986 Attention, Intentions, and the Structure of Discourse. In Computational Linguistics 12: 175-204.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning the User's Language: A Step Towards Automated Creation of User Models",
"authors": [
{
"first": "J",
"middle": [
"F"
],
"last": "Lehman",
"suffix": ""
},
{
"first": "J",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
}
],
"year": 1988,
"venue": "User Models in Dialog Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lehman, J. F. and Carbonell, J. G. 1988 Learning the User's Language: A Step Towards Automated Creation of User Models. In Kobsa, A. and Wahlster, W. (eds.), User Models in Dialog Systems. Springer-Verlag, Berlin--New York.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Language, Mind and Knowledge, Minnesota Studies in the Philosophy of Science 7",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 1975,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis, D. 1975 Languages and Language. In Gunderson, K. (ed.), Language, Mind and Knowledge, Minnesota Studies in the Philosophy of Science 7. University of Minnesota Press, Minneapolis, MN.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Partnermodellierung und Interessenprofile bei Dialogsystemen der Künstlichen Intelligenz",
"authors": [
{
"first": "K",
"middle": [],
"last": "Morik",
"suffix": ""
}
],
"year": 1984,
"venue": "Probleme des (Text-) Verstehens: Ansätze der Künstlichen Intelligenz. Niemeyer",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morik, K. 1984 Partnermodellierung und Interessenprofile bei Dialogsystemen der Künstlichen Intelligenz. In Rollinger, C. R. (ed.), Probleme des (Text-) Verstehens: Ansätze der Künstlichen Intelligenz. Niemeyer, Tübingen, W. Germany.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Tailoring Object Descriptions to a User's Level of Expertise",
"authors": [
{
"first": "C",
"middle": [
"L"
],
"last": "Paris",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paris, C. L. 1988 Tailoring Object Descriptions to a User's Level of Expertise. In Kobsa, A. and Wahlster, W. (eds.), User Models in Dialog Systems. Springer-Verlag, Berlin--New York.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Building and Exploiting User Models",
"authors": [
{
"first": "E",
"middle": [],
"last": "Rich",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich, E. 1979 Building and Exploiting User Models. Ph.D. thesis, Department of Computer Science, Carnegie-Mellon University, Pittsburgh, PA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Speech Acts",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Searle",
"suffix": ""
}
],
"year": 1969,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Searle, J. R. 1969 Speech Acts. Cambridge University Press, Cambridge, England.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Some Terminological Remarks on User Modeling. Paper presented at the International Workshop on User Modeling",
"authors": [
{
"first": "W",
"middle": [],
"last": "Wahlster",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wahlster, W. 1986 Some Terminological Remarks on User Modeling. Paper presented at the International Workshop on User Modeling, Maria Laach, W. Germany.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sprechakte",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wunderlich",
"suffix": ""
}
],
"year": 1972,
"venue": "Pragmatik und sprachliches Handeln",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wunderlich, D. 1972 Sprechakte. In Maas, U. and Wunderlich, D. (eds.), Pragmatik und sprachliches Handeln. Athenäum Verlag, Frankfurt, W. Germany.",
"links": null
}
},
"ref_entries": {}
}
}