| { |
| "paper_id": "2005", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:48:58.159663Z" |
| }, |
| "title": "Evaluation of a Dialogue System in an Automotive Environment", |
| "authors": [ |
| { |
| "first": "Liza", |
| "middle": [], |
| "last": "Hassel", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Ludwig Maximilian University Munich", |
| "location": {} |
| }, |
| "email": "liza.hassel@bmw.de" |
| }, |
| { |
| "first": "Eli", |
| "middle": [], |
| "last": "Hagen", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Ludwig Maximilian University Munich", |
| "location": {} |
| }, |
| "email": "eli.hagen@bmw.de" |
| }, |
| { |
| "first": "Innovationszentrum", |
| "middle": [], |
| "last": "Bmw", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Ludwig Maximilian University Munich", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "G" |
| ], |
| "last": "Munich", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Ludwig Maximilian University Munich", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper we discuss features to enhance the usability of a spoken dialogue system (SDS) in an automotive environment. We describe the tests that were performed to evaluate those features, and the methods used to assess the test results. One of these methods is a modification of PARADISE, a framework for evaluating the performance of SDSs (Walker et al., 1998). We discuss its drawbacks for the evaluation of SDSs like ours, the modifications we have carried out, and the test results.", |
| "pdf_parse": { |
| "paper_id": "2005", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper we discuss features to enhance the usability of a spoken dialogue system (SDS) in an automotive environment. We describe the tests that were performed to evaluate those features, and the methods used to assess the test results. One of these methods is a modification of PARADISE, a framework for evaluating the performance of SDSs (Walker et al., 1998). We discuss its drawbacks for the evaluation of SDSs like ours, the modifications we have carried out, and the test results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "SDSs for operating devices are still a small group in the class of dialogue systems. But, as available applications and research in this area show, there is a growing demand for such interfaces. They are being used in the mechanical CAD industry (think3, 2000) and in smart home systems (Fellbaum and Hampicke, 2002) . The operation of complicated devices by voice, e.g. programming a video-tape recorder, was surveyed by Aretoulaki and Ludwig (1998) . Our domain is the operation of devices like radio, navigation, and telephone while driving a car (Haller (2003) ; Hagen et al. (2004) ).", |
| "cite_spans": [ |
| { |
| "start": 246, |
| "end": 260, |
| "text": "(think3, 2000)", |
| "ref_id": null |
| }, |
| { |
| "start": 287, |
| "end": 316, |
| "text": "(Fellbaum and Hampicke, 2002)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 422, |
| "end": 450, |
| "text": "Aretoulaki and Ludwig (1998)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 550, |
| "end": 564, |
| "text": "(Haller (2003)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 567, |
| "end": 586, |
| "text": "Hagen et al. (2004)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We proposed features to enhance the usability of the system. These improvements are based on guidelines for SDSs in general, e.g. on the work of Dix et al. (1995) , and on principles for in-car SDSs, e.g. Dybkjaer et al. (1997) and the guideline ETSI EG 202 116 V 1.2.1 (2002, section 8.7.3). These features were implemented in a prototype and surveyed in two test series, one with a reference system (Hagen et al., 2004) and the other with the prototype. The importance of testing in a real environment was pointed out by Bernsen and Dybkjaer (2001) . Thus, evaluation of the prototype took place while driving in real traffic.", |
| "cite_spans": [ |
| { |
| "start": 145, |
| "end": 162, |
| "text": "Dix et al. (1995)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 205, |
| "end": 227, |
| "text": "Dybkjaer et al. (1997)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 246, |
| "end": 288, |
| "text": "ETSI EG 202 116 V 1.2.1 (2002, section 8.7", |
| "ref_id": null |
| }, |
| { |
| "start": 402, |
| "end": 422, |
| "text": "(Hagen et al., 2004)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 524, |
| "end": 551, |
| "text": "Bernsen and Dybkjaer (2001)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A frequently mentioned framework to evaluate SDSs is PARADISE (Walker et al., 1998) . It seeks to predict system performance (described in terms of user satisfaction) employing multiple regression analysis, using a task success metric based on the Kappa value, \u03ba (Carletta, 1996) , and dialogue costs as independent variables. We revised \u03ba for our system because it was developed on the basis of information dialogues with a well-defined set of task attributes, which is not the case for the dialogues we evaluated. Despite this modification, we found no relationship between task success and dialogue costs on the one hand and user satisfaction on the other. We discuss the reasons for this issue.", |
| "cite_spans": [ |
| { |
| "start": 62, |
| "end": 83, |
| "text": "(Walker et al., 1998)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 251, |
| "end": 282, |
| "text": "Kappa value, \u03ba (Carletta, 1996)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In section 2 we describe the SDS, and section 3 explains the features. In section 4 and 5 we describe the test design and the methods used to evaluate the tests. In sections 6 and 7 we present the findings of the evaluations. Section 8 summarizes these results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our speech interfaces were implemented as part of BMW's iDrive system (Haller, 2003) . In addition to speech, iDrive has a manual-visual interface with a central input device in the centre console (controller, fig. 1 ) and a central display in the centre column ( fig. 2) . When users operate the controller (turn left and right, push in four directions and press down), they receive visual feedback on the display. Over the speech channel, users can operate functions in the areas entertainment, communication and navigation. Users activate the speech recognizer with a push-to-talk (PTT) button on the steering wheel or in the middle console near the controller. The dialogue style is command and control, as illustrated in table 1. The iDrive SDS is currently configured for about 3000 words and phrases. iDrive with speech is available in several languages. For our experiments, we used the German version. For further information, see Hagen et al. (2004) .", |
| "cite_spans": [ |
| { |
| "start": 70, |
| "end": 84, |
| "text": "(Haller, 2003)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 964, |
| "end": 983, |
| "text": "Hagen et al. (2004)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 210, |
| "end": 216, |
| "text": "fig. 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 262, |
| "end": 269, |
| "text": "fig. 2)", |
| "ref_id": null |
| }, |
| { |
| "start": 693, |
| "end": 701, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "System Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Usability is a multidimensional property of a user interface. The definition we use is based on Nielsen (1993) . There, five usability dimensions are mentioned: Learnability, Efficiency, Memorability, Error, and Satisfaction. According to Nielsen (1993) , a system fulfills the demands of usability when it is easy to learn (U-1), efficient to use (after the learning phase, U-2), easy to remember (U-3), when it allows an easy recovery from errors (U-4), and when it is pleasant to use (U-5).", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 110, |
| "text": "Nielsen (1993)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 239, |
| "end": 253, |
| "text": "Nielsen (1993)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features for Enhancing Usability", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We aim at enhancing the usability of the system. The features discussed below help the SDS to conform to these requirements. We have classified the features, according to the degree of control users have over them, into implicit (I, section 3.1) and explicit (E, section 3.2) features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features for Enhancing Usability", |
| "sec_num": "3" |
| }, |
| { |
| "text": "With the help of the implicit features, the system adapts to the users' behavior (I-1, I-2, and I-3) and provides means to facilitate its use (I-2 and I-4).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implicit Features", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "I-1: The system prompts are adapted to the expertise of the users. For novices, the SDS mentions the available voice commands (options) without waiting for users to ask. Experts have to explicitly ask for options (table 1) . This feature is part of the adaptation concept described in (Hassel and Hagen, 2005) . Feature I-1 makes the system easy to learn (U-1). It also improves the interaction efficiency once users have learned how to use it (U-2), because the reduced prompts save time. And it makes the system more pleasant to use (U-5), because novices, due to the informative prompts, do not feel lost, and experts are not annoyed by long and repetitive prompts.", |
| "cite_spans": [ |
| { |
| "start": 285, |
| "end": 309, |
| "text": "(Hassel and Hagen, 2005)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 213, |
| "end": 222, |
| "text": "(table 1)", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Implicit Features", |
| "sec_num": "3.1" |
| }, |
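The prompt adaptation behind feature I-1 can be sketched in a few lines. This is our own stdlib-only illustration, not the system's actual implementation; the function name, the wording, and the single novice/expert flag are assumptions.

```python
def build_prompt(menu_name, options, user_is_novice):
    """Sketch of feature I-1: novices get the available voice commands
    (options) appended to the prompt without asking; experts get the
    reduced prompt and must request options explicitly (feature E-2)."""
    base = menu_name + "."
    if user_is_novice:
        listed = ", ".join("'%s'" % o for o in options)
        return base + " Say " + listed + "."
    return base

# Novices hear the informative prompt; experts only the short one.
novice = build_prompt("Entertainment", ["FM menu", "AM menu", "CD menu"], True)
expert = build_prompt("Entertainment", ["FM menu", "AM menu", "CD menu"], False)
```

A real system would of course decide noviceness per function area rather than globally, as the adaptation concept in (Hassel and Hagen, 2005) describes.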
| { |
| "text": "I-2: Certain tasks are more efficiently executed with a voice command than with the controller and GUI. In such cases, the system takes the initiative and suggests switching modality. I-2 improves the learnability of iDrive (U-1) because it tells users which modality is more appropriate for completing the current task, controller or voice. Feature I-2 was only available in the prototype. The experiments we carried out were restricted to the SDS; therefore, we could not test this multimodal feature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implicit Features", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "I-3: Timeouts and ASR failures cannot be completely avoided. Timeouts occur in most cases because drivers are distracted by the traffic environment or because they do not know what to say next. After the first timeout, the system repeats the prompt to catch the attention of the driver. After the second successive timeout, the system prompts the currently available options. Due to the limited vocabulary, ASR failures caused by OOV words can happen. If the system does not understand users after two tries, the system prompt is changed to contain the currently available options. Feature I-3 makes it easy for users to learn the system (U-1) and to recover from errors (U-4).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implicit Features", |
| "sec_num": "3.1" |
| }, |
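The timeout/ASR-failure escalation of I-3 amounts to a small decision rule. A hypothetical sketch (names and wording are ours, under the stated two-strike behaviour):

```python
def react(prompt, options, timeouts=0, asr_failures=0):
    """Sketch of feature I-3: after the first timeout the prompt is simply
    repeated to regain the driver's attention; after the second successive
    timeout, or after two ASR failures, the currently available options
    are added to the prompt."""
    if timeouts >= 2 or asr_failures >= 2:
        return prompt + " You can say " + ", ".join(options) + "."
    return prompt

first = react("FM menu.", ["choose frequency", "choose station"], timeouts=1)
second = react("FM menu.", ["choose frequency", "choose station"], timeouts=2)
```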
| { |
| "text": "I-4: The Speak What You See principle means that users can use the words or phrases labelling tasks on the GUI as voice commands. This principle diminishes the users' need to remember the commands (U-3) because they can look at the GUI to recall the available voice commands.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implicit Features", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "With the help of the explicit features, users can actively control what and when they learn. These features guarantee that users keep control over the system: they are in charge of the information they get from the SDS. E-1: With the \"help\" command users can learn about the general characteristics of the system: how to get a list of voice commands, how to get to the main menu, etc. It helps users understand the system (U-1). After \"options\" (E-2) the system prompts the currently available voice commands. The effect of this command is context sensitive. Feature E-2 helps novices learn the system (U-1) and experts learn about tasks they seldom use (U-2). Both E-1 and E-2 make the system more pleasant to use (U-5) because drivers do not need to look in the printed manuals for advice. E-3: Users can ask the system to suggest a faster way to achieve the current task. The system looks for shortcuts to one of the last dialogue states and suggests one to the user. Feature E-3 allows users to learn more efficient ways to use the SDS (U-2). E-4: The \"back\" command has a similar effect as the back button of a browser. During the first test series (reference system) some users tried to recover from misunderstandings using the command \"back\". Users expecting the command to be available were astonished and confused about its absence. E-4 allows users an easy recovery from errors (U-4), thus facilitating learning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Explicit Features", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "E-5: The \"up\" command allows users to navigate upwards in the GUI. Unlike \"back\", \"up\" does not undo user instructions. It only moves the focus from one layer to the one above. Using it, users can recover from misunderstandings (U-4) and abbreviate the interaction (U-2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Explicit Features", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The impact of each feature on usability is presented and discussed in section 7. The evaluation showed that the proposed features do contribute to enhancing the usability of the SDS.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Explicit Features", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The prototype described in section 2 was evaluated against a reference system with the same functionality and the same GUI (Hagen et al., 2004) . Two test series were carried out. For series A, a BMW 5 Series was equipped with a reference system. Series B with the prototype took place in a BMW 7 Series. A total of 44 subjects participated in the tests. The tests consisted of two parts, a driving part (duration: between 30 and 45 min) and a questionnaire. During the driving part the subjects were asked to complete eleven representative tasks (table 3) . Tasks 1 and 2 were repeated at the end of the test (tasks 10 and 11) to test the adaptation of the system and the learning progress of the participants: Could they achieve the task more efficiently? Did they already develop an operating strategy during the test time?", |
| "cite_spans": [ |
| { |
| "start": 123, |
| "end": 143, |
| "text": "(Hagen et al., 2004)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 547, |
| "end": 556, |
| "text": "(table 3)", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Test Design", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In addition to completing the tasks while driving, users were told to verbalise their thoughts as they used the system. The thinking-aloud method is described by Nielsen (1993) . After finishing the driving part, the test participants had to answer a five-page questionnaire.", |
| "cite_spans": [ |
| { |
| "start": 162, |
| "end": 176, |
| "text": "Nielsen (1993)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Test Design", |
| "sec_num": "4" |
| }, |
| { |
| "text": "To assess the test results we intended to use the evaluation framework PARADISE (Walker et al., 1998) . In recent years, PARADISE has often been reviewed (Whittaker et al. (2000) ; Paek (2001) ; Larsen (2003b) ; Aguilera and et al. (2004) ). The main limitation was found to be that tasks have to be clearly defined so that they can be described by an attribute-value matrix (AVM). Further, it was criticized that PARADISE was designed to evaluate only unimodal systems. Lastly, the assumption of a linear relationship between user satisfaction and subjective measures was called into question.", |
| "cite_spans": [ |
| { |
| "start": 80, |
| "end": 101, |
| "text": "(Walker et al., 1998)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 151, |
| "end": 175, |
| "text": "(Whittaker et al. (2000)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 178, |
| "end": 189, |
| "text": "Paek (2001)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 192, |
| "end": 206, |
| "text": "Larsen (2003b)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 209, |
| "end": 235, |
| "text": "Aguilera and et al. (2004)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Method", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Attempts have been made to revise PARADISE. Hjalmarsson (2002) proposes a new task definition for the evaluation of multimodal systems with non-AVM-describable tasks. We could not apply this method because they evaluated SDSs for information exchange and task success was calculated in terms of information bits. Beringer et al. (2002) also introduce a new task success measure to evaluate multimodal systems. They rate tasks as successful or not, but since we wanted to know how well users coped with the tasks, we discarded this method as well. In the next sections we describe the changes we made to PARADISE in order to apply it to our system.", |
| "cite_spans": [ |
| { |
| "start": 44, |
| "end": 62, |
| "text": "Hjalmarsson (2002)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 315, |
| "end": 337, |
| "text": "Beringer et al. (2002)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Method", |
| "sec_num": "5" |
| }, |
| { |
| "text": "SDSs for the car environment offer users a broad spectrum of tasks, e.g. dialing a telephone number, setting navigation options and tuning a radio frequency. The tasks in this environment can be represented by a directed, connected graph with marked and unmarked nodes ( fig. 3 ), through which users navigate and where a task is completed once they reach the desired node. The edges represent the transitions due to user utterances, and the nodes represent states of the dialogue space. Only a few edges are drawn; subdialogues (options and help requests, etc.) as well as the transitions caused by the command \"back\" were left out. Marked nodes are drawn with a heavy line, and utterances are set in quotation marks. Unmarked nodes are transitional states: the SDS remains active after users have reached such states, and the dialogue strategy remains user-initiated.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 279, |
| "end": 285, |
| "text": "fig. 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Modified \u03ba Calculation", |
| "sec_num": "5.1" |
| }, |
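The graph model just described can be made concrete with a minimal sketch. This is our own stdlib-only illustration (class and method names are assumptions): marked nodes are the task-completing states, unmarked nodes are transitional, and unknown utterances leave the state unchanged, standing in for the subdialogues and "back" transitions that the figure omits.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueGraph:
    """Directed dialogue space: edges[node][utterance] -> next node."""
    edges: dict = field(default_factory=dict)
    marked: set = field(default_factory=set)  # nodes where a task can complete

    def add_edge(self, src, utterance, dst, marked=False):
        self.edges.setdefault(src, {})[utterance] = dst
        if marked:
            self.marked.add(dst)

    def navigate(self, start, utterances):
        """Follow user utterances; return (final node, reached a marked node?)."""
        node = start
        for u in utterances:
            node = self.edges.get(node, {}).get(u, node)  # unknown: stay put
        return node, node in self.marked

# Fragment A of fig. 3, roughly: choosing a view completes the task.
g = DialogueGraph()
g.add_edge("Navigation", "arrow view", "View", marked=True)
g.add_edge("Navigation", "change scale", "Scale", marked=True)
node, done = g.navigate("Navigation", ["arrow view"])
```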
| { |
| "text": "Fragment A in figure 3 presents two possibilities: Users can navigate to the node View by choosing a view in the navigation menu (north, driving, arrows) -in figure 3 users chose \"arrow view\" -or they can navigate to the node Scale by saying they want to change the scale of the map. In the latter case, the system takes the initiative and asks users what scale they want. When users reach a marked node, usually either the dialogue is done immediately (node View), or the system takes the initiative to require information from the users, and then the dialogue is done (nodes Scale and Dial Number). But whether a task has been completed or not is not always that easy to answer. The crux of the matter is the goal of the users: If they just want to have the phone menu displayed, then the task is done after they reach the node Phone ( fig. 3) . In our SDS, tasks cannot be described in terms of AVMs.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 844, |
| "end": 851, |
| "text": "fig. 3)", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Modified \u03ba Calculation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Since our dialogues cannot be represented by AVMs, we had to define \u03ba in a different way. Instead of task attributes, we have specified for each task a set of nodes, starting from the main menu and following the usual paths to the nodes that represent the test tasks. Figure 4 shows the AVM of task 5 (dial a phone number), represented as a graph in figure 3 part B. Since the tasks for the tests are fixed, for each task a subset of nodes defines when it is complete. The black diagonal cells Ready represent the final states.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 267, |
| "end": 276, |
| "text": "Figure 4", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Modified \u03ba Calculation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "In PARADISE only utterances referring to task attributes are recorded in the AVM. We also include those that contribute indirectly to accomplishing the tasks. For this purpose we introduce the following attributes: OPTIONS/HELP, STOP, REPEAT, FAILURE, and BACK (for the prototype). FAILURE subsumes answer failures due to a voice recognition misunderstanding (grey columns in figure 4), answer failures due to a wrong user input (last diagonal cells) and correct system answers to wrong user utterances (grey rows).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Modified \u03ba Calculation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "PARADISE computes only correctly recognised utterances or \"misunderstandings that are not corrected in the dialogue\" because \"the effect of misunderstandings that are corrected during the course of the dialogue are reflected in the costs associated with the dialogue\" (Walker et al., 1998) . Such an AVM is supposed to \"summarize how well an agent achieves the information requirements of a particular task\" (Walker et al., 1998) . But, since our dialogues are not based on information requirements, we do not have a set of attributes that have to be accomplished for the task to be successful. Therefore, we consider all utterances that occur during the dialogue in order to compute \u03ba. Following (Walker et al., 1998) , we consider the FAILURE cells in the calculation of the total number of utterances, but exclude them from the calculation of P (A) and P (E). Such an AVM summarizes how well users coped with the task.", |
| "cite_spans": [ |
| { |
| "start": 268, |
| "end": 289, |
| "text": "(Walker et al., 1998)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 408, |
| "end": 429, |
| "text": "(Walker et al., 1998)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 697, |
| "end": 718, |
| "text": "(Walker et al., 1998)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Modified \u03ba Calculation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u03ba is usually used to measure pairwise agreement among a set of coders making category judgments, correcting for chance expected agreement (Siegel and Castellan, 1988 ). There, P (A) is the proportion of times that the coders agree and P (E) is the proportion of times that one would expect them to agree by chance (\u03ba formula 1). This Kappa, which we call \u03ba * , is calculated in a slightly different way than in PARADISE (\u03ba P ). The definition of P (A) is the same in both cases (formula 2 1 ). In PARADISE, P (E) is calculated using only the columns of the matrix (formula 3), thus taking only the exchanged information into consideration, independently of who uttered it, system or users. The standard calculation of P (E) includes rows and columns (formula 4), so that with \u03ba * both the system's and the user's side are taken into consideration. We have calculated \u03ba * and \u03ba P to see which one correlates better with our data.", |
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 165, |
| "text": "(Siegel and Castellan, 1988", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Modified \u03ba Calculation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u03ba = (P(A) \u2212 P(E)) / (1 \u2212 P(E)) (1); P(A) = p_11 + p_22 + p_33 + \u2026 + p_nn (2); P(E) = (p_.1)^2 + (p_.2)^2 + (p_.3)^2 + \u2026 + (p_.n)^2 (3); P(E) = p_1. p_.1 + p_2. p_.2 + p_3. p_.3 + \u2026 + p_n. p_.n (4)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Modified \u03ba Calculation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "For a better understanding of the formulas listed above, we display a matrix to illustrate the meaning of the terms used. A to N are the attributes, p_xy is the number of times an attribute was chosen divided by the total number of utterances T, and p_x. and p_.x are the sum of all values in row x over all columns and the sum of all values in column x over all rows, respectively:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Modified \u03ba Calculation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Columns A, B, \u2026, N. Row A: p_11 p_12 \u2026 p_1n (row sum p_1.); row B: p_21 p_22 \u2026 p_2n (row sum p_2.); \u2026; row N: p_n1 p_n2 \u2026 p_nn (row sum p_n.). Column sums: p_.1 p_.2 \u2026 p_.n.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Modified \u03ba Calculation", |
| "sec_num": "5.1" |
| }, |
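Formulas (1)-(4) can be computed directly from such a matrix of utterance counts. The following is our own stdlib-only sketch, not the authors' code: `paradise=True` uses the column-only chance term of formula (3), giving \u03ba_P, and the default uses the standard row-times-column term of formula (4), giving \u03ba*.

```python
def kappa(counts, paradise=False):
    """Kappa from a square attribute-by-attribute matrix of counts.

    P(A) is the normalized diagonal sum (formula 2); P(E) is either the
    PARADISE column-only term (formula 3) or the standard row*column
    term (formula 4); kappa itself is formula (1)."""
    total = float(sum(sum(row) for row in counts))
    n = len(counts)
    p = [[c / total for c in row] for row in counts]   # p_xy = count / T
    p_a = sum(p[i][i] for i in range(n))               # formula (2)
    col = [sum(p[i][j] for i in range(n)) for j in range(n)]  # p_.j
    row = [sum(p[i][j] for j in range(n)) for i in range(n)]  # p_i.
    if paradise:
        p_e = sum(c * c for c in col)                  # formula (3)
    else:
        p_e = sum(r * c for r, c in zip(row, col))     # formula (4)
    return (p_a - p_e) / (1 - p_e)                     # formula (1)

m = [[20, 5], [5, 20]]       # invented 2-attribute example
k_star = kappa(m)            # standard kappa*
k_p = kappa(m, paradise=True)
```

For a symmetric matrix like this toy example the two variants coincide; they differ as soon as row and column marginals diverge.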
| { |
| "text": "The data analysed so far from our experiments did not confirm the claim of a correlation between user satisfaction and \u03ba * together with the cost factors. Besides \u03ba * , we used the cost factors barge-in, help and option requests, and number of turns (section 6) as independent variables to calculate the performance function. Before the calculation all values were normalized to z-scores, so that we could easily check whether there were outliers that would have distorted the comparison; this was not the case. Using U S1 (section 7) as the dependent variable, we obtained for system B a coefficient of determination r^2 = 0.07. Therefore, we cannot apply the multivariate linear regression proposed in PARADISE to calculate a performance function for our systems. In spite of that, we found \u03ba * to be a good measure of how difficult it was for users to accomplish (or try to accomplish) the task. Further analysis of the data will show whether this assumption is right.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Modified \u03ba Calculation", |
| "sec_num": "5.1" |
| }, |
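The normalization and fit-quality steps described above can be sketched as follows. This is a stdlib-only illustration under our own assumptions; a one-predictor fit's r^2 computation stands in for the full multivariate regression, and the sample values are invented.

```python
import statistics

def z_scores(xs):
    """Normalize values to zero mean and unit (population) std deviation."""
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

def r_squared(y, y_hat):
    """Coefficient of determination r^2 of predictions y_hat against y."""
    mu = statistics.mean(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mu) ** 2 for a in y)
    return 1 - ss_res / ss_tot

z = z_scores([30.0, 45.0, 60.0])        # e.g. task durations in seconds
fit = r_squared([3, 4, 5], [3, 4, 5])   # perfect predictions
```

An r^2 near 0, as the 0.07 reported above, means the predictors explain almost none of the variance in user satisfaction.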
| { |
| "text": "In this section we present the results of our test series. For our evaluation we use the usual metrics as described in Larsen (2003a) and NIST (2001) . We compare the following cost factors for systems A and B: Task duration, number of turns, task success, number of barge-in attempts at the beginning of system utterances, and number of option and help requests.", |
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 133, |
| "text": "Larsen (2003a)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 138, |
| "end": 149, |
| "text": "NIST (2001)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of the Driving Part", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Figure 5 shows how long it took the users to complete the different tasks in the two systems. Test subjects for series A (TS_A) needed on average 62.1 sec to complete a task, and test subjects for series B (TS_B) 47.0 sec (footnote 2). Seven of the eleven tasks were accomplished faster with system B than with A. The results for the other four tasks differ from what was expected: First, the longer task completion times for tasks 7 and 9 in the prototype. This can largely be explained by the fact that all test subjects were novices. The system prompts of B for these tasks were much longer than those of A, for example:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Duration", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Footnote 2: Interruptions due to traffic conditions were documented during the test and then used to rectify the times accordingly. Second, the different progress between tasks 1 and 2 and their repetitions, 10 and 11, in both systems. The task duration for task 10 is lower than for task 1 in both systems, but more remarkably so in system A. Users of this system needed on average only 30% of the time for task 10 that they had needed for task 1. The duration of task 11 decreased only in system A. In system B these values remained almost the same as for task 2. This may indicate that users of the reference system learned faster that they can speak the tasks they want to activate directly (shortcuts). The help given to the novices in the prototype seems to slow down this insight among its users. They repeatedly applied the same tactics: they followed the menu structure of the system instead of speaking the desired commands directly. The effect on user satisfaction will be discussed in section 7. Figure 6 shows how many turns users needed to complete the different tasks in the two systems. TS_A needed on average 8.7 turns to complete a task, and TS_B 6.9 turns. Seven of the eleven tasks were accomplished with fewer interactions in system B than in A. The results for the other four tasks differ from what was expected. First, TS_B needed more turns to complete tasks 7 and 9 than TS_A. This can be explained by the kind of system utterances TS_B got. Test subjects were all novices and, therefore, these utterances told the users which commands they could speak next. Most users employed exactly the commands offered by the system, which led them to follow the menu structure each time rather than skipping nodes, i.e. using shortcuts. Second, the different progress between tasks 1 and 2 and their repetitions, 10 and 11, in both systems. TS_A needed for task 10 only ca. 30% of the turns they needed for task 1. TS_B still needed ca. 66% of the turns to complete the task. The number of turns for task 11 decreased only in system A. For system B these values remained almost the same as for task 2. This can also be explained by the kind of utterances in system B. In many cases, these users were treated as experts while solving tasks 1 and 2 the second time, i.e. tasks 10 and 11. However, they behaved as they had learned and skipped almost no nodes.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1003, |
| "end": 1011, |
| "text": "Figure 6", |
| "ref_id": "FIGREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Task Duration", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "As tables 5 and 6 show, there is a high correlation between task duration and number of turns. Therefore, either measure can be used for calculating the performance function. According to Nielsen (1993), systems designed for novices should be easy to learn, i.e. the learning curve should be very steep at the beginning. Comparing tasks 1 and 2 with 10 and 11 (tables 5 and 6), we observed that TS_B reached the asymptote of the curve very quickly, i.e. users learned very fast how to use our prototype. The system prompts for novices served their purpose. Our tests confirmed that the initial part of the learning curve for the prototype's users corresponds to the recommended shape.", |
| "cite_spans": [ |
| { |
| "start": 184, |
| "end": 198, |
| "text": "Nielsen (1993)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Number of Turns", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "TS_A learned by trial and error that they can speak the tasks they want to activate directly, leaving out the nodes in between. The first time they completed tasks 1 and 2 they were not as successful as TS_B, but they were more efficient the second time they completed those tasks. According to Nielsen (1993), systems designed for experts are hard to learn but highly efficient, i.e. the learning curve is flat at the beginning (Nielsen, 1993). The next question is whether our prototype would also fulfill the requirements stated by Nielsen (1993) for experts. The prompts of system B for experts become almost the same as those of system A, which improves efficiency. Furthermore, the prototype offers users a \"suggestion\" feature to learn better ways of completing a task (cf. section 3.2). Long-term experiments still have to show whether system B displays a typical expert learning curve over time. Figure 7 compares the task success rates for both systems. For system B the mean success rate reached 94%; system A's mean success rate was 78%. Only 3% of the tasks could not be completed at all, in either system, usually because users gave up. Ca. 15% of the tasks in system A and ca. 3% of the tasks in system B were accomplished only partly, most frequently because users were confused and asked the experimenter for a hint, or because they said the right command but the system did not understand it. The ASR system was the same in both series; therefore, the main reason for this difference (15% vs. 3% for systems A and B) was that TS_B were less confused about what to say next. This confirms the benefit of telling novices the available commands. 6.4 Other Cost Factors Figure 8 shows a comparison of four cost factors (the number of option, help, and OOV-help requests, and the number of barge-in attempts at the end of system prompts). The commands \"help\" and \"options\" are described in section 3.2. Under OOV-help requests we counted help requests in which users employed OOV words. With the cost factor barge-in we consider utterances spoken before the ASR was listening. All four factors were divided by the number of turns needed to accomplish each task. Comparing relative values allows us to remove the influence of the number of turns from these cost factors.", |
| "cite_spans": [ |
| { |
| "start": 295, |
| "end": 309, |
| "text": "Nielsen (1993)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 428, |
| "end": 443, |
| "text": "(Nielsen, 1993)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 530, |
| "end": 544, |
| "text": "Nielsen (1993)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 899, |
| "end": 907, |
| "text": "Figure 7", |
| "ref_id": "FIGREF7" |
| }, |
| { |
| "start": 1679, |
| "end": 1687, |
| "text": "Figure 8", |
| "ref_id": "FIGREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Number of Turns", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "The dialogue costs were lower for the prototype than for the reference system in every task. Most remarkable is the decrease in option requests for system B: TS_B asked only 1/5 as often as TS_A did. The reason was that TS_B got the available commands from the system without having to ask for them; therefore, they usually knew what to say. The same applies to the number of help requests: TS_B asked for help 1/3 as often as TS_A did. The number of OOV help requests was almost the same for both systems. Barge-in was nine times more frequent in series A than in series B. System A signaled users with a tone at the end of every prompt that they could speak; only then was the ASR active. System B relied on the turn-taking theory of conversation analysis (Clark, 1997) and omitted that additional auditory turn-taking signal. The strong decrease in commands uttered ahead of time verified that this strategy was the more natural one. The comparison of these four cost factors confirms that users cope better with the prototype.", |
| "cite_spans": [ |
| { |
| "start": 810, |
| "end": 823, |
| "text": "(Clark, 1997)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Success", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "The questionnaire uses a Likert scale with four choices ranging from strongly opposed (1) to strongly in favour (4). It consists of four parts: questions about the participant, about his technical background, about the test (users' attitude towards the system), and about the system (how users judge the system's ergonomics).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of the Questionnaire", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We calculated two factors to measure user satisfaction (US_1 and US_2). US_1 subsumes three answers to questions about the test: \"I could complete all tasks without problems\", \"I find the system easy to use\", and \"I got frequently upset during the test\". US_2 subsumes three answers to questions about the system: \"I would recommend the system\", \"I really want to have such a system in my car\", and \"I find the system very useful\". Figures 9 and 10 show the values of US_1 and US_2 for both systems, plotted against the task completion rate. Users rated US_1 and US_2 better for series B than for series A.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 437, |
| "end": 454, |
| "text": "Figures 9 and 10", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation of the Questionnaire", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We summarize users' evaluation of the features in table 5. The features marked with an asterisk were provided only in system B; the other features were available in both systems. In the second column we have specified the effect of each feature on usability. Below we discuss the evidence found for each feature (I-1 to I-4, and E-1 to E-5). The statements in brackets are questions from the questionnaire, and the percentages refer to the proportion of users (strongly) agreeing with the statement.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "No Partly Yes", |
| "sec_num": null |
| }, |
| { |
| "text": "I-1: Many more TS_B than TS_A (almost) never looked at the GUI. They did not need to do so because they knew the commands. In general, users found the enumeration of the available options a good means to learn the system. For this reason, and because they knew they could ask for options and help (E-1, E-2), they approved of adaptation. I-3: Considerably more TS_A than TS_B asserted that \"help prompts were not (at all) helpful\". This difference may be explained by the time at which help was issued in the two systems. While in series B novices got help right away after saying a command, TS_A had to wait for the second ASR failure or timeout to get system-initiated help. By that time, many users were already confused and no longer found the offered options very helpful.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "No Partly Yes", |
| "sec_num": null |
| }, |
| { |
| "text": "I-4: The tests also confirmed the importance of the graphical context for usability. Users expect the text on the GUI to be voice commands.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "No Partly Yes", |
| "sec_num": null |
| }, |
| { |
| "text": "E-1, E-2: The results for these features were contradictory. On the one hand, more TS_A than TS_B stated that they did not know how to ask for help. On the other hand, every test subject asked for help at least once, using either the \"help\" or the \"options\" command. Maybe they were not aware of it, but they used the commands instinctively. E-3: The \"suggestion\" command was rated differently by TS_A and TS_B. While TS_A had some doubts about this feature, TS_B, having tested it, approved of it. E-4: The tests verified that error recovery is normally very difficult for users to deal with, and that users' expectations due to knowledge transfer are extremely persistent (Norman, 2002). Therefore, the \"back\" command found broad acceptance among users. E-5: The command \"up\" did not have the same positive impact on the usability of the system as \"back\". Thus, this command's contribution to usability does not justify its expensive implementation.", |
| "cite_spans": [ |
| { |
| "start": 699, |
| "end": 713, |
| "text": "(Norman, 2002)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "No Partly Yes", |
| "sec_num": null |
| }, |
| { |
| "text": "We calculated two task success measures based on PARADISE, \u03ba_P and \u03ba*, but we could not find a linear relation between US and task success plus cost factors. Consequently, we could not use these methods to calculate system performance. However, \u03ba* proved appropriate for assessing how difficult it was for users to accomplish (or try to accomplish) a task. Table 6 shows a comparison of the \u03ba* values for tasks 1 to 5 for both systems. These values show that users dealt better with the prototype. Table 6: \u03ba* for Reference System (A) and Prototype (B)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 369, |
| "end": 376, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 507, |
| "end": 514, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Users' levels of satisfaction US 1 and US 2 were almost completely unrelated to success rates. One reason for this finding may lie in the novelty of voice interfaces in the automotive environment. The characteristics of the test subjects largely agreed with those of early adopters: young, urban, and highly educated. For such users, the main goal of operating an innovative system is the interaction itself, not task completion. Experiments with real customers should be carried out to confirm this hypothesis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Another reason for the absence of correlation might be the redundancy of the system. The voice interface is not the only input device but an additional possibility, besides manual input, of operating the comfort functions available in the car. Therefore, users' requirements differ from those for, e.g., telephony SDSs, where the voice interface is the sole input device.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "All subjective and nearly all objective measures were better for series B. Test persons had not used the voice interface in the car before. The results of our evaluations confirm the expected positive effects of prompt adaptation and of the other proposed features. However, we do not know how experts would cope with the systems. On the one hand, the comparison of tasks 1 and 2 with their repetitions, 10 and 11, showed that the learning curve was very steep for system B. On the other hand, tasks 7 and 9 suggest that the extended prompts for novices in system B could lead users to operate the system in a less straightforward manner than in system A, because they did not use shortcuts. The prompts of system B become the same as in system A when users become experts. Will experts change their habits and learn the shortcuts? Long-term evaluations have to be performed to investigate the benefit of the proposed features over time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "In this definition, p.x is equivalent to t_x/T in PARADISE.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank Klaus Schulz (LMU, Munich) for helpful discussions clarifying our ideas and for comments on earlier drafts. We also thank Stefan P\u00f6hn (Berner & Mattner) for the programming, helping to make our often chaotic ideas concrete. Thanks to Alexander Huber (BMW AG) for his continuing encouraging support.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Usability Evaluation Issues in Natural Interactive and Multimodal Systems -State of the Art and Current Practice (Draft Version)", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [ |
| "J" |
| ], |
| "last": "Aguilera", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "O" |
| ], |
| "last": "Bernsen", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Project SIMILAR SIG7 on Usability and Evaluation, Deliverable D16", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. J. G\u00f3mez Aguilera and N. O. Bernsen et al. 2004. Usability Evaluation Issues in Natural Interactive and Multimodal Systems -State of the Art and Current Practice (Draft Version). Technical report. Project SIMILAR SIG7 on Usability and Evaluation, Deliv- erable D16.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Skizzierung eines allgemeinen Szenarios f\u00fcr Bediendialoge . Jahresbericht 1998 der Forschungsgruppe Wissensverarbeitung am Bayerischen Forschungszentrum f\u00fcr wissensbasierte Systeme", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Aretoulaki", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Ludwig", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Aretoulaki and B. Ludwig. 1998. Skizzierung eines allgemeinen Szenarios f\u00fcr Bediendialoge . Jahres- bericht 1998 der Forschungsgruppe Wissensverar- beitung am Bayerischen Forschungszentrum f\u00fcr wis- sensbasierte Systeme, http://www-wv.informatik.uni- erlangen.de/fg-wv/.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "PROMISE -A Procedure for Multimodal Interactive System Evaluation", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Beringer", |
| "suffix": "" |
| }, |
| { |
| "first": "U", |
| "middle": [], |
| "last": "Kartal", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Louka", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Schiel", |
| "suffix": "" |
| }, |
| { |
| "first": "U", |
| "middle": [], |
| "last": "T\u00fcrk", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "N. Beringer, U. Kartal, K. Louka, F. Schiel, and U. T\u00fcrk. 2002. PROMISE -A Procedure for Multimodal In- teractive System Evaluation. Technical report, LMU M\u00fcnchen, Institut f\u00fcr Phonetik und sprachliche Kom- munikation. Teilprojekt 1: Modalit\u00e4tsspezifische Analysatoren, Report Nr. 23.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Exploring Natural Interaction in the Car", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [ |
| "O" |
| ], |
| "last": "Bernsen", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Dybkjaer", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "International Workshop on Information Presentation and Natural Multimodal Dialogue", |
| "volume": "", |
| "issue": "", |
| "pages": "75--79", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "N. O. Bernsen and L. Dybkjaer. 2001. Exploring Natural Interaction in the Car. In International Workshop on Information Presentation and Natural Multimodal Di- alogue, pages 75-79, Verona, Italy 14-15 Dec. 2001.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Assessing Agreement on Classification Tasks: The Kappa Statistic", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Carletta", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Computational Linguistics", |
| "volume": "22", |
| "issue": "2", |
| "pages": "249--254", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Carletta. 1996. Assessing Agreement on Classification Tasks: The Kappa Statistic. Computational Linguis- tics, 22(2):249-254.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Using Language. Cambridge University Press", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [ |
| "H" |
| ], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. H. Clark. 1997. Using Language. Cambridge Univer- sity Press, Cambridge, New York, Melbourne.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Mensch Maschine Methodik", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Dix", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Finlay", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Abowd", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Dix, J. Finlay, and G. Abowd. 1995. Mensch Mas- chine Methodik. Prentice Hall.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Designing Co-Operativity in Spoken Human-Machine Dialogue", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Dybkjaer", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "O" |
| ], |
| "last": "Bernsen", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Dybkjaer", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Research Reports Esprit", |
| "volume": "2", |
| "issue": "", |
| "pages": "104--124", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. Dybkjaer, N. O. Bernsen, and H. Dybkjaer, 1997. De- signing Co-Operativity in Spoken Human-Machine Di- alogue, volume 2 of Research Reports Esprit, pages 104-124. Springer Verlag.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Guidelines for ICT Products and Services; Design for All. European Telecommunications Standards Institute (ETSI)", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guidelines for ICT Products and Services; Design for All. European Telecommunications Standards Institute (ETSI).", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Human-Computer Interaction in a Smart Home Environment", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Hampicke", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "4th International Congress on Gerontechnology", |
| "volume": "", |
| "issue": "", |
| "pages": "1--6", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Fellbaum and M. Hampicke. 2002. Human-Computer Interaction in a Smart Home Environment. In 4th International Congress on Gerontechnology, Miami Beach, USA, pages 1-6, November 9-12.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Spracheingabe im neuen BMW 6er. Sonderheft ATZ/MTZ (Der neue BMW 6er)", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Hagen", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Said", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Eckert", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "134--139", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Hagen, T. Said, and J. Eckert. 2004. Spracheingabe im neuen BMW 6er. Sonderheft ATZ/MTZ (Der neue BMW 6er), May:134-139.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "The Display and Control Concept iDrive -Quick Access to All Driving and Comfort Functions", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Haller", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Haller. 2003. The Display and Control Concept iDrive -Quick Access to All Driving and Comfort Functions.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "The New BMW 5-Series)", |
| "authors": [ |
| { |
| "first": "Atz/Mtz", |
| "middle": [], |
| "last": "Extra", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "51--53", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "ATZ/MTZ Extra (The New BMW 5-Series), August:51- 53.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Adaptation of an Automotive Dialogue System to Users' Expertise", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Hassel", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Hagen", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "6th SIGdial Workshop on Discourse and Dialogue", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. Hassel and E. Hagen. 2005. Adaptation of an Automotive Dialogue System to Users' Expertise. In 6th SIGdial Workshop on Discourse and Dialogue, Lisbon, Portugal, 2-3 September 2005. Draft Version.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Evaluating AdApt, a Multi-Modal Conversational, Dialogue System Using PAR-ADISE. Master's thesis, Department of Speech Music and Hearing", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Hjalmarsson", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Hjalmarsson. 2002. Evaluating AdApt, a Multi- Modal Conversational, Dialogue System Using PAR- ADISE. Master's thesis, Department of Speech Mu- sic and Hearing, KTH Royal Institute of Technology, Stockholm.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Evaluation Methodologies for Spoken and Multi Modal Dialogue Systems -Revision 2", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [ |
| "B" |
| ], |
| "last": "Larsen", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. B. Larsen. 2003a. Evaluation Methodologies for Spo- ken and Multi Modal Dialogue Systems -Revision 2. May 2003 (Draft Version). Presented at the COST 278 MC-Meeting in Stockholm 2.-4. May 2003.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Issues in the Evaluation of Spoken Dialogue Systems using Objective and Subjective Measures", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [ |
| "B" |
| ], |
| "last": "Larsen", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 8th IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. B. Larsen. 2003b. Issues in the Evaluation of Spo- ken Dialogue Systems using Objective and Subjective Measures. In Proceedings of the 8th IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), St. Thomas, U.S. Virgin Islands, Dec. 2003.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Usability Engineering", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Nielsen", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Nielsen. 1993. Usability Engineering. Academic Press Professional, Boston u. a.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Common Industry Format for Usability Test Reports -Version 2.0", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Nist", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "NIST. 2001. Common Industry Format for Usability Test Reports -Version 2.0, May 18, 2001. Technical report, National Institute of Standards and Technology.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "The Design of Everyday Things", |
| "authors": [ |
| { |
| "first": "Donald", |
| "middle": [ |
| "A" |
| ], |
| "last": "Norman", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Donald A. Norman. 2002. The Design of Everyday Things. Basic Books, New York.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Empirical Methods for Evaluating Dialog Systems", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Paek", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "ACL 2001 Workshop on Evaluation Methodologies for Language and Dialogue Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Paek. 2001. Empirical Methods for Evaluating Dia- log Systems. In ACL 2001 Workshop on Evaluation Methodologies for Language and Dialogue Systems, Toulouse, France.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Nonparametric statistics for the behavioral sciences", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Siegel", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "J" |
| ], |
| "last": "Castellan", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Singapore. think3. 2000. Think3: thinkdesign 6.0 Debuts To Rave Reviews. Press Releases", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Siegel and N. J. Castellan. 1988. Nonparametric sta- tistics for the behavioral sciences. McGraw-Hill Inter- national, Singapore. think3. 2000. Think3: thinkdesign 6.0 De- buts To Rave Reviews. Press Releases, http://www.think3.com/en/news/.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Evaluating Spoken Dialogue Agents with PAR-ADISE: Two Case Studies", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "A" |
| ], |
| "last": "Walker", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "J" |
| ], |
| "last": "Litman", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "A" |
| ], |
| "last": "Kamm", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Abella", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Computer Speech and Language", |
| "volume": "12", |
| "issue": "3", |
| "pages": "317--347", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. A. Walker, D. J. Litman, C. A. Kamm, and A. Abella. 1998. Evaluating Spoken Dialogue Agents with PAR- ADISE: Two Case Studies. Computer Speech and Language, 12(3):317-347.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Let's Stop Pushing the Envelope and Start Addressing It: A Reference Task Agenda for HCI", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Whittaker", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Terveen", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [ |
| "A" |
| ], |
| "last": "Nardi", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Human Computer Interaction", |
| "volume": "15", |
| "issue": "", |
| "pages": "75--106", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Whittaker, L. Terveen, and B. A. Nardi. 2000. Let's Stop Pushing the Envelope and Start Addressing It: A Reference Task Agenda for HCI. Human Computer Interaction, 15:75-106.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "num": null, |
| "text": "Controller and PTT-button", |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "num": null, |
| "text": "Fragments of the Dialogue Space", |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "num": null, |
| "text": "Calculation of \u03ba * for Task 5 (Dial a Phone Number)", |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "num": null, |
| "text": ", you can say new destination, last destinations, route, change map style, or change map scale. 1 (choose frequency 93.3) 2 (choose station bayern 5) 3 (play titel x of the current cd) 4 (activate traffic programm) 5 (dial a phone number)6 (dial a name from the address book) 7 (display the navigation map) 8 (change the map scale) 9 (change the map style)10 (choose an arbitrary frequency) 11 (choose an arbitrary station)", |
| "type_str": "figure" |
| }, |
| "FIGREF6": { |
| "uris": null, |
| "num": null, |
| "text": "Number of Turns", |
| "type_str": "figure" |
| }, |
| "FIGREF7": { |
| "uris": null, |
| "num": null, |
| "text": "Did Users Complete the Tasks?", |
| "type_str": "figure" |
| }, |
| "FIGREF8": { |
| "uris": null, |
| "num": null, |
| "text": "Relative Number of Option Requests, Help Requests, and Barge-in.", |
| "type_str": "figure" |
| }, |
| "FIGREF9": { |
| "uris": null, |
| "num": null, |
| "text": "Figure 9: User Satisfaction (1)", |
| "type_str": "figure" |
| }, |
| "FIGREF10": { |
| "uris": null, |
| "num": null, |
| "text": "Never looked at the display: TS_B: 68%, TS_A: 45%. Options should be prompted every time: TS_B: 77%, TS_A: 27%. Were not (at all) helpful: TS_B: 36%, TS_A: 64%. Absolutely necessary: TS_B: 100%, TS_A: 86%. It was easy to rectify a misunderstanding: TS_B: 45%, TS_A: 27%.", |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "num": null, |
| "content": "<table><tr><td>Sample Dialogue</td></tr></table>", |
| "text": "", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF2": { |
| "num": null, |
| "content": "<table><tr><td>Test Series</td><td>A</td><td>B</td></tr><tr><td/><td>(Reference System)</td><td>(Prototype)</td></tr><tr><td>Mean Age</td><td>28.77</td><td>25.64</td></tr><tr><td>(Range)</td><td>(21-43 years old)</td><td>(22-33 years old)</td></tr><tr><td>Number</td><td>22</td><td>22</td></tr><tr><td>of Subjects</td><td colspan=\"2\">(15 male, 7 female) (15 male, 7 female)</td></tr></table>", |
| "text": "summarizes the participants' characteristics.", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF3": { |
| "num": null, |
| "content": "<table/>", |
| "text": "", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF5": { |
| "num": null, |
| "content": "<table/>", |
| "text": "Test Tasks", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF6": { |
| "num": null, |
| "content": "<table><tr><td>user:</td><td>Navigation menu.</td></tr><tr><td colspan=\"2\">system: Navigation. You can say route criteria, map, ...</td></tr><tr><td>user:</td><td>Map.</td></tr><tr><td colspan=\"2\">system: Map. You can say map style, or change scale.</td></tr><tr><td>user:</td><td>Change scale.</td></tr><tr><td colspan=\"2\">system: Choose a scale.</td></tr><tr><td>user:</td><td>200 meters</td></tr><tr><td colspan=\"2\">system: Scale changed to 200 meters</td></tr></table>", |
| "text": "In B, users navigate to the node Dial Number, where they are asked to enter a telephone number. This subdialogue is displayed inside the node.", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF7": { |
| "num": null, |
| "content": "<table/>", |
| "text": "Dialogue Leading to the Scale Node inFigure 3", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF11": { |
| "num": null, |
| "content": "<table/>", |
| "text": "Features and Test Results (*feature is provided only in the prototype)", |
| "type_str": "table", |
| "html": null |
| } |
| } |
| } |
| } |