| { |
| "paper_id": "N03-1012", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:06:53.993140Z" |
| }, |
| "title": "Semantic Coherence Scoring Using an Ontology", |
| "authors": [ |
| { |
| "first": "Iryna", |
| "middle": [], |
| "last": "Gurevych", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "European Media Lab GmbH Schloss", |
| "institution": "", |
| "location": { |
| "addrLine": "Wolfsbrunnenweg 31c", |
| "postCode": "D-69118", |
| "settlement": "Heidelberg", |
| "country": "Germany" |
| } |
| }, |
| "email": "gurevych@eml.org\u00a1" |
| }, |
| { |
| "first": "Rainer", |
| "middle": [], |
| "last": "Malaka", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "European Media Lab GmbH Schloss", |
| "institution": "", |
| "location": { |
| "addrLine": "Wolfsbrunnenweg 31c", |
| "postCode": "D-69118", |
| "settlement": "Heidelberg", |
| "country": "Germany" |
| } |
| }, |
| "email": "malaka@eml.org\u00a1" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Porzel", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "European Media Lab GmbH Schloss", |
| "institution": "", |
| "location": { |
| "addrLine": "Wolfsbrunnenweg 31c", |
| "postCode": "D-69118", |
| "settlement": "Heidelberg", |
| "country": "Germany" |
| } |
| }, |
| "email": "porzel@eml.org\u00a1" |
| }, |
| { |
| "first": "Hans-Peter", |
| "middle": [], |
| "last": "Zorn", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "European Media Lab GmbH Schloss", |
| "institution": "", |
| "location": { |
| "addrLine": "Wolfsbrunnenweg 31c", |
| "postCode": "D-69118", |
| "settlement": "Heidelberg", |
| "country": "Germany" |
| } |
| }, |
| "email": "zorn@eml.org\u00a1" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper we present ONTOSCORE, a system for scoring sets of concepts on the basis of an ontology. We apply our system to the task of scoring alternative speech recognition hypotheses (SRH) in terms of their semantic coherence. We conducted an annotation experiment and showed that human annotators can reliably differentiate between semantically coherent and incoherent speech recognition hypotheses. An evaluation of our system against the annotated data shows that, it successfully classifies 73.2% in a German corpus of 2.284 SRHs as either coherent or incoherent (given a baseline of 54.55%).", |
| "pdf_parse": { |
| "paper_id": "N03-1012", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper we present ONTOSCORE, a system for scoring sets of concepts on the basis of an ontology. We apply our system to the task of scoring alternative speech recognition hypotheses (SRH) in terms of their semantic coherence. We conducted an annotation experiment and showed that human annotators can reliably differentiate between semantically coherent and incoherent speech recognition hypotheses. An evaluation of our system against the annotated data shows that, it successfully classifies 73.2% in a German corpus of 2.284 SRHs as either coherent or incoherent (given a baseline of 54.55%).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Following Allen et al. (2001) , we can distinguish between controlled and conversational dialogue systems. Since controlled and restricted interactions between the user and the system increase recognition and understanding accuracy, such systems are reliable enough to be deployed in various real world applications, e.g. public transportation or cinema information systems. The more conversational a dialogue system becomes, the less predictable are the users' utterances. Recognition and processing become increasingly difficult and unreliable.", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 29, |
| "text": "Allen et al. (2001)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Today's dialogue systems employ domain-and discourse-specific knowledge bases, so-called ontologies, to represent the individual discourse entities as concepts, and their relations to each other. In this paper we present an algorithm for measuring the semantic coherence of sets of concepts against such an ontology. In the following, we will show how the semantic coherence measurement can be applied to estimate how well a given speech recognition hypothesis (SRH) fits with respect to the existing knowledge representation, thereby providing a mechanism that increases the robustness and reliability of dialogue systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In Section 2 we discuss the problem of scoring and classifying SRHs in terms of their semantic coherence followed by a description of our annotation experiment. Section 3 contains a description of the kind of knowledge representations employed by ONTOSCORE. We present the algorithm in Section 4, and an evaluation of the corresponding system for scoring SRHs is given in Section 5. A conclusion and additional applications are given in Section 6.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Recognition Hypotheses", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Coherence and Speech", |
| "sec_num": "2" |
| }, |
| { |
| "text": "While a simple one-best hypothesis interface between automatic speech recognition (ASR) and natural language understanding (NLU) suffices for restricted dialogue systems, more complex systems either operate on n-best lists as ASR) output or convert ASR word graphs (Oerder and Ney, 1993) into n-best lists, given the distribution of acoustic and language model scores (Schwartz and Chow, 1990; Tran et al., 1996) . For example, in our data a user expressed the wish to see a specific city map again, as: 1", |
| "cite_spans": [ |
| { |
| "start": 265, |
| "end": 287, |
| "text": "(Oerder and Ney, 1993)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 368, |
| "end": 393, |
| "text": "(Schwartz and Chow, 1990;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 394, |
| "end": 412, |
| "text": "Tran et al., 1996)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Problem", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "(1) Facing multiple representations of a single utterance consequently poses the question, which of the different hypotheses corresponds most likely to the user's utterance. Several ways of solving this problem have been proposed and implemented in various systems. Frequently the scores provided by the ASR system itself are used, e.g. acoustic and language model probabilities. More recently also scores provided by the NLU system have been employed, e.g. parsing scores or discourse scores (Litman et al., 1999; Engel, 2002; Alexandersson and Becker, 2003) . However, these methods assign higher scores to SRHs which are semantically incoherent and lower scores to semantically coherent ones and disagree with other.", |
| "cite_spans": [ |
| { |
| "start": 493, |
| "end": 514, |
| "text": "(Litman et al., 1999;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 515, |
| "end": 527, |
| "text": "Engel, 2002;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 528, |
| "end": 559, |
| "text": "Alexandersson and Becker, 2003)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Problem", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Ich", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Problem", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "For instance, the acoustic and language model scores of Example (1b) are actually better than for Example (1a), which results from the fact that the frequencies and corresponding probabilities for important expressions, such as Good Bye, are rather high, thereby ensuring their reliable recognition. Another phenomenon found in our data consists of hypotheses such as: In these cases language model scores are higher for Example (2) than Example (3), as the incorrect inflection on alle Filmen was less frequent in the training material than that of the correct inflection on alle Vergn\u00fcgen.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Problem", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Our data also shows -as one would intuitively expect -that the understanding-based scores generally reflect how well a given SRH is covered by the grammar employed. In many less well-formed cases these scores do not correspond to the correctness of the SRH. Generally we find instances where all existing scoring methods disagree with each other, diverge from the actual word error rate and ignore the semantic coherence. 2 Neither of the aforementioned approaches systematically employs the system's knowledge of the domains at hand. This increases the number of times where a suboptimal recognition hypothesis is passed through the system. This means that, while there was a better representation of the actual utterance in the n-best list, the NLU system is processing an inferior one, thereby causing overall dialogue metrics, in the sense of Walker et al. (2000) , to decrease. We propose an alternative way to rank SRHs on the basis of their semantic coherence with respect to a given ontology representing the domains of the system.", |
| "cite_spans": [ |
| { |
| "start": 847, |
| "end": 867, |
| "text": "Walker et al. (2000)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Problem", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In a previous study (Gurevych et al., 2002) , we tested if human annotators could reliably classify SRHs in terms of their semantic coherence. The task of the annotators was to determine whether a given hypothesis representsa n internally coherent utterance or not.", |
| "cite_spans": [ |
| { |
| "start": 20, |
| "end": 43, |
| "text": "(Gurevych et al., 2002)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Experiments", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In order to test the reliability of such annotations, we collected a corpus of SRHs. The data collection was conducted by means of a hidden operator test. We had 29 subjects prompted to say certain inputs in 8 dialogues. 1.479 turns were recorded. Each user-turn in the dialogue corresponded to a single intention, e.g. a route request or a sight information request. The audio files were then sent to the speech recognizer and the input to the semantic coherence scoring module, i.e. n-best lists of SRHs were recorded in log-files. The final corpus consisted of 2.284 SRHs. All hypotheses were then randomly mixed to avoid contextual influences and given to separate annotators. The resulting Kappa statistics (Carletta, 1996) over the annotated data yields", |
| "cite_spans": [ |
| { |
| "start": 712, |
| "end": 728, |
| "text": "(Carletta, 1996)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Experiments", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u00a2 \u00a1 \u00a3 \u00a5 \u00a4 \u00a7 \u00a6", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Experiments", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": ", which seems to indicate that human annotators can reliably distinguish between coherent samples (as in Example (1a)) and incoherent ones (as in Example (1b)).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Experiments", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The aim of the work presented here, then, was to provide a knowledge-based score, that can be employed by any NLU system to select the best hypothesis from a given n-best list. ONTOSCORE, the resulting system will be described below, followed by its evaluation against the human gold standard.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Experiments", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In this section, we provide a description of the preexisting knowledge source employed by ONTOSCORE, as far as it is necessary to understand the empirical data generated by the system. It is important to note that the ontology employed in this evaluation existed already and was crafted as a general knowledge representation for various processing modules within the system. 3 Ontologies have traditionally been used to represent general and domain specific knowledge and are employed for various natural language understanding tasks, e.g. semantic interpretation (Allen, 1987) . We propose an additional way of employing ontologies, i.e. to use the knowledge modeled therein as the basis for evaluating the semantic coherence of sets of concepts.", |
| "cite_spans": [ |
| { |
| "start": 375, |
| "end": 376, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 564, |
| "end": 577, |
| "text": "(Allen, 1987)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Knowledge Base", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The system described herein can be employed independently of the specific ontology language used, as the underlying algorithm operates only on the nodes and named edges of the directed graph represented by the ontology. The specific knowledge base, e.g. written in DAML+OIL or OWL, 4 is converted into a graph, consisting of: the class hierarchy, with each class corresponding to a concept representing either an entity or a process; the slots, i.e. the named edges of the graph corresponding to the class properties, constraints and restrictions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Knowledge Base", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The ontology employed herein has about 730 concepts and 200 relations. It includes a generic top-level ontology whose purpose is to provide a basic structure of the world, i.e. abstract classes to divide the universe in distinct parts as resulting from the ontological analysis. The top-level was developed following the procedure outlined in Russell and Norvig (1995) .", |
| "cite_spans": [ |
| { |
| "start": 343, |
| "end": 368, |
| "text": "Russell and Norvig (1995)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Knowledge Base", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In the view of the ontology employed herein, Role is the most general class in the ontology and represents a role that any entity or process can perform. It is divided into Event and Abstract Event. Event is used to describe a kind of role any entity or process may have in a \"real\" situation or process, e.g. a building or an information search. It is contrasted with Abstract Event, which is abstracted from a set of situations and processes. It reflects no reality and is used for the general categorization and description, e.g. Number, Set, Spatial Relation. There are two kinds of events: Physical Object and Process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Knowledge Base", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The class Physical Object describes any kind of objects we come in contact with -living as well as nonliving -having a location in space and time in contrast to abstract objects. These objects refer to different domains, such as Sight and Route in the tourism domain, Av Medium and Actor in the TV and cinema domain, etc., and can be associated with certain relations in the processes via slot constraint definitions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Knowledge Base", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The modeling of Process as a kind of event that is continuous and homogeneous in nature, follows the frame semantic analysis used for generating the FRAMENET data (Baker et al., 1998) . Currently, there are four groups of processes (see Information Search Process features one additional slot constraint, piece-of-information. The possible slot-fillers are a range of domain objects, e.g. Sight, Performance, or whole sets of those, e.g. Tv Program, but also processes, e.g. Controlling Tv Device Process. This way, an utterance such as: ", |
| "cite_spans": [ |
| { |
| "start": 163, |
| "end": 183, |
| "text": "(Baker et al., 1998)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Knowledge Base", |
| "sec_num": "3" |
| }, |
| { |
| "text": "ONTOSCORE performs a number of processing steps, each of them will be described separately in the respective subsections.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ontology-based Scoring of SRHs", |
| "sec_num": "4" |
| }, |
| { |
| "text": "A necessary preprocessing step is to convert each SRH into a concept representation (CR). For that purpose we augmented the system's lexicon with specific concept mappings. That is, for each entry in the lexicon either zero, one or many corresponding concepts where added. A simple vector of the concepts, corresponding to the words in the SRH for which concepts in the lexicon exist, constitutes the resulting CR. All other words with empty concept mappings, e.g. articles, are ignored in the conversion. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mapping of SRH to Sets of Concepts", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "ONTOSCORE converts the domain model, i.e. an ontology, into a directed graph with concepts as nodes and relations as edges. One additional problem that needed to be solved lies in the fact that the directed subclass-of relations enable path algorithms to ascend the class hierarchy upwards, but do not let them descend, therefore missing a significant set of possible paths. In order to remedy that situation the graph was enriched during its conversion by corresponding parent-of relations, which eliminated the directionality problems as well as avoids cycles and 0paths. In order to find the shortest path between two concepts, ONTOSCORE employs the single source shortest path algorithm of Dijkstra (Cormen et al., 1990) . Given a concept representation CR", |
| "cite_spans": [ |
| { |
| "start": 694, |
| "end": 724, |
| "text": "Dijkstra (Cormen et al., 1990)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mapping of CR to Graphs", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u00a2 \u00a1 \u00a4 \u00a3 , ..., \u00a1 \u00a6 \u00a5 \u00a7", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mapping of CR to Graphs", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": ", the algorithm runs once for each concept. The Dijkstra algorithm calculates minimal paths from a source node to all other nodes. Then, the minimal paths connecting a given concept ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mapping of CR to Graphs", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "To score the minimal paths connecting all concepts with each other in a given CR, we first adopted a method pro-posed by Demetriou and Atwell (1994) to score the semantic coherence of alternative sentence interpretations against graphs based on the Longman Dictionary of Contemporary English (LDOCE). To construct the graph the dictionary lemmata were represented as nodes in an isa hierarchy and their semantic relations were represented as edges, which were extracted automatically from the LDOCE.", |
| "cite_spans": [ |
| { |
| "start": 121, |
| "end": 148, |
| "text": "Demetriou and Atwell (1994)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Scoring Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "As defined by Demetriou and Atwell (1994) ,", |
| "cite_spans": [ |
| { |
| "start": 14, |
| "end": 41, |
| "text": "Demetriou and Atwell (1994)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Scoring Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u00a1 \u00a3 \u00a4 \u00a4\u00a4\u00a4\u00a5 \u00a7", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Scoring Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "is the set of direct relations (both isa and semantic relations) that can connect two nodes (concepts); and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Scoring Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u00a1 \u00a2 ! \u00a3 \" ! # \u00a4\u00a4\u00a4! \u00a5 \u00a7", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Scoring Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "is the set of corresponding weights, where the weight of each isa relation is set to can be given as: , i.e.:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Scoring Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "( B A \u00a1 \u00a5 C \u00a9 E D F \u00a3 H G \u00a9 I ! # \u00a9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Scoring Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "4 \u00a1 \u00a9 \u00a1 & % 9 \u00a1 8 \u00a2 \u00a1 4 ( A \" 9 3 \u00a1 $ \u00a4 \u00a3 \u00a4\u00a4\u00a4 ) 8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Scoring Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The algorithm selects from the set of all paths between two concepts the one with the smallest weight, i.e. the cheapest. The distances between all concept pairs in CR are summed up to a total score. The set of concepts with the lowest aggregate score represents the combination with the highest semantic relatedness. Demetriou and Atwell (1994) do not provide concrete evaluation results for the method. Also, their algorithm only allows for a relative judgment stating which of a set of interpretations given a single sentence is more semantically related.", |
| "cite_spans": [ |
| { |
| "start": 318, |
| "end": 345, |
| "text": "Demetriou and Atwell (1994)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Scoring Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Since our objective is to compute semantic coherence scores of arbitrary CRs on an absolute scale, certain extensions are necessary. In this application, the CRs to be scored can differ in terms of their content, the number of concepts contained therein and their mappings to the original SRH. Moreover, in order to achieve absolute values, the final score should be related to the number of concepts in an individual set and the number of words in the original SRH. Therefore, the results must be normalized in order to allow for evaluation, comparability and clearer interpretation of the semantic coherence scores.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Scoring Algorithm", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We modified the algorithm described above to make it applicable and evaluatable with respect to the task at hand as well as other possible tasks. The basic idea is to calculate a score based on the path distances in . This maximum value can also serve as a maximum for long distances and can thus help to prune the search tree for long paths. This constant has to be set according to the structure of the knowledge base. For example, employing the ontology described above, the maximum distance between two concepts does not exceed ten and we chose in that case", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring Concept Representations", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "1 \u00a7 \u00a6 \u00a9 \u00a1 $ \u00a3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring Concept Representations", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": ". We can now define the semantic coherence score for \u00a5 as the average path length between all concept pairs in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring Concept Representations", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "\u00a5 : 4 \u00a5 9 \u00a1 ! # \" % $ & ( ' D 4 \u00a1 \u00a6 \u00a9 \u00a1 % 9 ) \u00a5 ) 1 0 ) \u00a5 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring Concept Representations", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Since the ontology is a directed graph, we have between a given pair of concepts, regardless of the direction, is taken into account.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring Concept Representations", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Given the algorithm proposed above, a significant number of misclassifications for SRHs would result from the cases when an SRH contains a high proportion of function words (having no conceptual mappings in the resulting CR) and only a few content words. Let's consider the following example: The corresponding CR is constituted out of a single concept Information Search Process. ON TOSCORE would classify the CR as coherent with the highest possible score, as this is the only concept in the set. This, however, would often lead to misclassifications. We, therefore, included a post-processing technique that takes the relation between the number of ontology concepts 7 in a given CR and the total number of words 7 9 8 in the original SRH into account. This relation is defined by the ratio @ \u00a1 A 7 \u00a9 B 7 C 8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word/Concept Relation", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": ". ONTOSCORE automatically classifies an SRH as being incoherent irrespective of its semantic coherence score, if @ is less then the threshold set. The threshold may be set freely. The corresponding findings are presented in the evaluation section.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word/Concept Relation", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "Looking at an example of ONTOSCORE at work, we will examine the utterance given in Example (1). They are converted into a graph. According to the algorithm shown in Section 4.3, all paths between the concepts of each graph are calculated and weighted. This yields the following non-1 \u00a7 \u00a6 \u00a9 paths:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ONTOSCORE at Work", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "\u00a5 \u00a3 4 \u00a1 \u00a3 \u00a2 \u00a5 \u00a4 \u00a7 \u00a6 \u00a9 \u00a5 \u00a7 \u00a6 \u00a5 \u00a5 \u00a4 \u00a3 \u00a2 \u00a5 \u00a3 \u00a6 \u00a5 ! ! \u00a5 \" ! \u00a5 $ # 9 \u00a1 7 $", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ONTOSCORE at Work", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "via the relation has-watcher;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ONTOSCORE at Work", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "4 \u00a1 \u00a3 \u00a2 \u00a5 \u00a4 \u00a7 \u00a6 \u00a9 \u00a5 \u00a7 \u00a6 \u00a5 \u00a5 \u00a4 \u00a3 \u00a2 \u00a5 \u00a3 \u00a6 \u00a5 ! ! & % \u00a2 $ 9 \u00a1 7 $", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ONTOSCORE at Work", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "via the relation has-watchable object.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ONTOSCORE at Work", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "\u00a5 4 ' \u00a2 \u00a5 \u00a4 ) ( 0 # 1 2 \u00a3 \u00a6 \u00a5 ! ! \" ! $ $ # 9 \u00a1 $", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ONTOSCORE at Work", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "via the relation has-agent;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ONTOSCORE at Work", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "The ensuing results are:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ONTOSCORE at Work", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "According to . To allow for a binary classification into semantically coherent vs. incoherent samples, a cutoff threshold must be set. The results of the corresponding experiments will be presented in Section 5.2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ONTOSCORE at Work", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "Due to lexical ambiguity, the process of transforming an n-best list of SRH to concept representations often results in a set of CRs that is greater than 1, i.e. a given SRH could be transformed into a set of CRs", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Disambiguation", |
| "sec_num": "4.7" |
| }, |
| { |
| "text": "\u00a5 \u00a3 , ..., \u00a5 \u00a5 B \u00a7", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Disambiguation", |
| "sec_num": "4.7" |
| }, |
| { |
| "text": ". Word sense disambiguation could, therefore, also independently be performed using the semantic coherence scoring described herein as an additional application of our approach. However, that has not been investigated thoroughly yet.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Disambiguation", |
| "sec_num": "4.7" |
| }, |
| { |
| "text": "For example, lexicon entries for the words: and corresponding final scores:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Disambiguation", |
| "sec_num": "4.7" |
| }, |
| { |
| "text": "I -", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Disambiguation", |
| "sec_num": "4.7" |
| }, |
| { |
| "text": "U W V ' 8 @ 9 B A Y X \u00e0 c b ; U W V ' 8 @ 9 F X \u00e0 H d f e g i h ; U W V ' 8 @ 9 I X \u00e0 H p f e q \u00a9 d ; U W V ' 8 @ 9 P R r X \u00e0 H p ; U W V ' 8 @ 9 G S s X \u00e0 H d f e d ; U W V ' 8 @ 9 P T r X \u00e0 u t w v B Q x y s ;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Disambiguation", |
| "sec_num": "4.7" |
| }, |
| { |
| "text": "The examination of the resulting scores allows us to conclude that are much less coherent and may, thus, be considered inadequate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Disambiguation", |
| "sec_num": "4.7" |
| }, |
| { |
| "text": "The ONTOSCORE software runs as a module in SMARTKOM (Wahlster et al., 2001 ), a multi-modal and multi-domain spoken dialogue system. The system features the combination of speech and gesture as its input and output modalities. The domains of the system include cinema and TV program information, home electronic device control, mobile services for tourists, e.g. tour planning and sights information.", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 74, |
| "text": "(Wahlster et al., 2001", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Context", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "ONTOSCORE operates on n-best lists of SRHs produced by the language interpretation module out of the ASR word graphs. It computes a numerical ranking of alternative SRH and thus provides an important aid to the understanding component of the system in determining the best SRH. The ONTOSCORE software employs two knowledge sources, an ontology (about 730 concepts and 200 relations) and a word/concept lexicon (ca. 3.600 words), covering the respective domains of the system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Context", |
| "sec_num": "5.1" |
| }, |
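The ranking step lends itself to a compact illustration. The sketch below assumes, as in path-based semantic relatedness measures, that a concept set is the more coherent the shorter the average path between its concept pairs in the ontology's relation graph; the function names and the toy concept graph are illustrative only, not the actual SMARTKOM ontology or the ONTOSCORE implementation:

```python
from itertools import combinations
from collections import deque

def shortest_path_len(graph, a, b):
    """BFS over an undirected concept-relation graph; returns hop count or None."""
    if a == b:
        return 0
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nb in graph.get(node, ()):
            if nb == b:
                return d + 1
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return None  # no path in the ontology

def score_concept_set(graph, concepts, penalty=10):
    """Average pairwise path length; lower = more coherent.
    Unconnected pairs receive a fixed penalty (an assumption of this sketch)."""
    pairs = list(combinations(concepts, 2))
    if not pairs:
        return 0.0
    total = 0
    for a, b in pairs:
        d = shortest_path_len(graph, a, b)
        total += penalty if d is None else d
    return total / len(pairs)

# Toy ontology fragment; concept names are hypothetical.
graph = {
    "Person": ["CognitiveProcess"],
    "CognitiveProcess": ["Person", "InformationSearchProcess"],
    "InformationSearchProcess": ["CognitiveProcess", "Sight"],
    "Sight": ["InformationSearchProcess"],
    "TvDevice": [],
}
coherent = score_concept_set(graph, ["Person", "InformationSearchProcess", "Sight"])
incoherent = score_concept_set(graph, ["Person", "TvDevice", "Sight"])
assert coherent < incoherent
```

Concept sets of alternative SRHs can then be compared directly: the set with the smallest average distance is ranked first.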
| { |
| "text": "The evaluation of ONTOSCORE was carried out on a dataset of 2,284 SRHs. We reformulated the problem of measuring semantic coherence as classifying the SRHs into two classes: coherent and incoherent. To our knowledge, no comparable software performing semantic coherence scoring exists that could be used for comparison in this evaluation. Therefore, we decided to use the results from human annotation (see Section 2.2) as the baseline.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "A gold standard for the evaluation of ONTOSCORE was derived by having the annotators agree on the correct solution in cases of disagreement. This way, we obtained 1,246 (54.55%) SRHs classified as coherent by humans, which is also taken as the baseline for this evaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
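As a quick sanity check, the quoted majority-class baseline follows directly from the counts given above:

```python
# Majority-class baseline: always predict the more frequent class (coherent).
coherent_count, total = 1246, 2284
baseline = max(coherent_count, total - coherent_count) / total
print(f"{baseline:.2%}")  # prints 54.55%
```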
| { |
| "text": "Additionally, we performed an inverse linear transformation of the scores, so that the output produced by ONTOSCORE is a score on a scale from 0 to 1, where higher scores indicate greater coherence. In order to obtain a binary classification of SRHs into coherent versus incoherent with respect to the knowledge base, we set a cutoff threshold. The results of the program (in %) as a function of the threshold value are shown in Figure 1 . The best results are achieved with the threshold 0.29. With this threshold, ONTOSCORE correctly classifies 1,487 SRHs, i.e. 65.11% of the evaluation dataset (the word/concept relation is not taken into account at this point). Figure 3 shows how the results of ONTOSCORE depend on the threshold for the word/concept relation, given the best cutoff threshold for the classification (i.e. 0.29) derived in the previous experiments.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 473, |
| "end": 481, |
| "text": "Figure 1", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 709, |
| "end": 717, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
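The cutoff search described above can be sketched as a simple accuracy sweep; the toy data, the helper name `best_threshold`, and the step granularity are illustrative assumptions, not the actual evaluation setup:

```python
def best_threshold(scores, labels, steps=100):
    """Sweep cutoffs over [0, 1]; a hypothesis counts as coherent when score >= t.
    Returns (best_t, best_accuracy) against the gold labels."""
    best_t, best_acc = 0.0, 0.0
    for i in range(steps + 1):
        t = i / steps
        correct = sum((s >= t) == y for s, y in zip(scores, labels))
        acc = correct / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Toy data: normalized coherence scores and human labels (True = coherent).
scores = [0.9, 0.4, 0.35, 0.1, 0.6, 0.2]
labels = [True, True, False, False, True, False]
t, acc = best_threshold(scores, labels)
```

Applied to the real score/label pairs, such a sweep would recover an optimum like the reported cutoff of 0.29.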
| { |
| "text": "The best results are achieved when the proportion of concepts to words is required to be no less than 1 to 3. Under these settings, ONTOSCORE correctly classifies 1,672 SRHs, i.e. 73.2% of the evaluation dataset. This technique thus brings an additional improvement of 8.09% compared to the initial results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
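Combining the two cutoffs (the 0.29 score threshold and the 1-to-3 concept/word proportion) amounts to a conjunctive decision rule; `classify_srh` is a hypothetical helper illustrating that logic, not the system's actual interface:

```python
def classify_srh(score, n_concepts, n_words, score_cutoff=0.29, ratio_cutoff=1/3):
    """Conjunctive rule: an SRH counts as coherent only if its normalized coherence
    score clears the cutoff AND at least one concept is found per three words."""
    if n_words == 0 or n_concepts / n_words < ratio_cutoff:
        return False
    return score >= score_cutoff

assert classify_srh(0.5, 2, 5) is True   # 2/5 = 0.4 >= 1/3, score clears 0.29
assert classify_srh(0.5, 1, 5) is False  # too few concepts relative to words
assert classify_srh(0.2, 2, 5) is False  # ratio fine, score below cutoff
```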
| { |
| "text": "The ONTOSCORE system described herein automatically performs ontology-based scoring of sets of concepts constituting an adequate representation of speech recognition hypotheses (Figure 3 : finding the optimal threshold for the word/concept relation). To date, the algorithm has been implemented in a software module which is employed by a multi-domain and multi-modal dialogue system and applied to the task of scoring n-best lists of SRHs, thus producing a score expressing how well a given SRH fits within the domain model. For this task, it provides an alternative knowledge-based score next to the ones provided by the ASR and the NLU system. In the evaluation of our system we employed an ontology that was not designed for this task, but already existed as the system's internal knowledge representation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 109, |
| "end": 117, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "6" |
| }, |
| { |
| "text": "As future work we will examine how the computation of a discourse-dependent semantic coherence score, i.e. how well a given SRH fits within the domain model with respect to the previous discourse, can improve the overall score. Additionally, we intend to calculate the semantic coherence score with respect to individual domains of the system, thus enabling domain recognition and domain change detection in complex multi-modal and multi-domain spoken dialogue systems. Currently, we are also beginning to investigate whether the proposed method can be applied to scoring sets of potential candidates for resolving the semantic interpretation of ambiguous, polysemous and metonymic language use.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "6" |
| }, |
| { |
| "text": "All examples are displayed with the German original on top and a glossed translation below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "As is evident from the numbers on large-vocabulary speech recognition performance (Cox et al., 2000), the occurrence of less well-formed and incoherent SRHs increases the more conversational a system becomes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Alternative knowledge representations, such as WORDNET, could in theory have been employed as well; however, most of the modern domains of the system, e.g. electronic media or program guides, are not covered by WORDNET. DAML+OIL and OWL are frequently used knowledge modeling languages originating in W3C and Semantic Web projects; for more detail, see www.w3c.org.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work has been partially funded by the German Federal Ministry of Research and Technology (BMBF) as part of the SmartKom project under Grant 01 IL 905C/0 and by the Klaus Tschira Foundation. We would like to thank Michael Strube for his helpful comments on the previous versions of this paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The Formal Foundations Underlying Overlay", |
| "authors": [ |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Alexandersson", |
| "suffix": "" |
| }, |
| { |
| "first": "Tilman", |
| "middle": [], |
| "last": "Becker", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the Fifth International Workshop on Computational Semantics (IWCS-5)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jan Alexandersson and Tilman Becker. 2003. The For- mal Foundations Underlying Overlay. In Proceedings of the Fifth International Workshop on Computational Semantics (IWCS-5), Tilburg, The Netherlands, Febru- ary.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "An architecture for more realistic conversational system", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [ |
| "F" |
| ], |
| "last": "Allen", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Ferguson", |
| "suffix": "" |
| }, |
| { |
| "first": "Amanda", |
| "middle": [], |
| "last": "Stent", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of Intelligent User Interfaces", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James F. Allen, George Ferguson, and Amanda Stent. 2001. An architecture for more realistic conversational system. In Proceedings of Intelligent User Interfaces, pages 1-8, Santa Fe, NM.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Natural Language Understanding", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [ |
| "F" |
| ], |
| "last": "Allen", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James F. Allen. 1987. Natural Language Understanding. Menlo Park, Cal.: Benjamin Cummings.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The Berkeley FrameNet Project", |
| "authors": [ |
| { |
| "first": "Collin", |
| "middle": [ |
| "F" |
| ], |
| "last": "Baker", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [ |
| "J" |
| ], |
| "last": "Fillmore", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "B" |
| ], |
| "last": "Lowe", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of COLING-ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proceedings of COLING-ACL, Montreal, Canada.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Assessing agreement on classification tasks: The kappa statistic", |
| "authors": [ |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Carletta", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Computational Linguistics", |
| "volume": "22", |
| "issue": "2", |
| "pages": "249--254", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jean Carletta. 1996. Assessing agreement on classifi- cation tasks: The kappa statistic. Computational Lin- guistics, 22(2):249-254.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Introduction to Algorithms", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [ |
| "H" |
| ], |
| "last": "Cormen", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [ |
| "E" |
| ], |
| "last": "Leiserson", |
| "suffix": "" |
| }, |
| { |
| "first": "Ronald", |
| "middle": [ |
| "R" |
| ], |
| "last": "Rivest", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas H. Cormen, Charles E. Leiserson, and Ronald R. Rivest. 1990. Introduction to Algorithms. MIT press, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Speech and language processing for next-millenium communications services", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [ |
| "V" |
| ], |
| "last": "Cox", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "A" |
| ], |
| "last": "Kamm", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [ |
| "R" |
| ], |
| "last": "Rabiner", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Schroeter", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "G" |
| ], |
| "last": "Wilpon", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the IEEE", |
| "volume": "88", |
| "issue": "", |
| "pages": "1314--1334", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R.V. Cox, C.A. Kamm, L.R. Rabiner, J. Schroeter, and J.G. Wilpon. 2000. Speech and language process- ing for next-millenium communications services. Pro- ceedings of the IEEE, 88(8):1314-1334.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A semantic network for large vocabulary speech recognition", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Demetriou", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Atwell", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of AISB workshop on Computational Linguistics for Speech and Handwriting Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George Demetriou and Eric Atwell. 1994. A seman- tic network for large vocabulary speech recognition. In Lindsay Evett and Tony Rose, editors, Proceed- ings of AISB workshop on Computational Linguistics for Speech and Handwriting Recognition, University of Leeds.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "SPIN: Language understanding for spoken dialogue systems using a production system approach", |
| "authors": [ |
| { |
| "first": "Ralf", |
| "middle": [], |
| "last": "Engel", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of ICSLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ralf Engel. 2002. SPIN: Language understanding for spoken dialogue systems using a production system ap- proach. In Proceedings of ICSLP 2002.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Annotating the semantic consistency of speech recognition hypotheses", |
| "authors": [ |
| { |
| "first": "Iryna", |
| "middle": [], |
| "last": "Gurevych", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Porzel", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Strube", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the Third SIGdial Workshop on Discourse and Dialogue", |
| "volume": "", |
| "issue": "", |
| "pages": "46--49", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iryna Gurevych, Robert Porzel, and Michael Strube. 2002. Annotating the semantic consistency of speech recognition hypotheses. In Proceedings of the Third SIGdial Workshop on Discourse and Dialogue, pages 46-49, Philadelphia, USA, July.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Automatic detection of poor speech recognition at the dialogue level", |
| "authors": [ |
| { |
| "first": "Diane", |
| "middle": [ |
| "J" |
| ], |
| "last": "Litman", |
| "suffix": "" |
| }, |
| { |
| "first": "Marilyn", |
| "middle": [ |
| "A" |
| ], |
| "last": "Walker", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "S" |
| ], |
| "last": "Kearns", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "309--316", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diane J. Litman, Marilyn A. Walker, and Michael S. Kearns. 1999. Automatic detection of poor speech recognition at the dialogue level. In Proceedings of the 37th Annual Meeting of the Association for Com- putational Linguistics, College Park, Md., 20-26 June 1999, pages 309-316.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Word graphs: An efficient interface between continuousspeech recognition and language understanding", |
| "authors": [ |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Oerder", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "119--122", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Martin Oerder and Hermann Ney. 1993. Word graphs: An efficient interface between continuous- speech recognition and language understanding. In ICASSP Volume 2, pages 119-122.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Artificial Intelligence. A Modern Approach", |
| "authors": [ |
| { |
| "first": "Stuart", |
| "middle": [ |
| "J" |
| ], |
| "last": "Russell", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Norvig", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stuart J. Russell and Peter Norvig. 1995. Artificial In- telligence. A Modern Approach. Prentice Hall, Engle- wood Cliffs, N.J.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "The n-best algorithm: an efficient and exact procedure for finding the n most likely sentence hypotheses", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Chow", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Proceedings of ICASSP'90", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Schwartz and Y. Chow. 1990. The n-best algo- rithm: an efficient and exact procedure for finding the n most likely sentence hypotheses. In Proceedings of ICASSP'90, Albuquerque, USA.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "A word graph based n-best search in continuous speech recognition", |
| "authors": [ |
| { |
| "first": "B-H", |
| "middle": [], |
| "last": "Tran", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Seide", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Steinbiss", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Chow", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of ICSLP'96", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B-H. Tran, F. Seide, V. Steinbiss, R. Schwartz, and Y. Chow. 1996. A word graph based n-best search in continuous speech recognition. In Proceedings of ICSLP'96.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Smartkom: Multimodal communication with a life-like character", |
| "authors": [ |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Wahlster", |
| "suffix": "" |
| }, |
| { |
| "first": "Norbert", |
| "middle": [], |
| "last": "Reithinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Anselm", |
| "middle": [], |
| "last": "Blocher", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 7th European Conference on Speech Communication and Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "1547--1550", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wolfgang Wahlster, Norbert Reithinger, and Anselm Blocher. 2001. Smartkom: Multimodal communica- tion with a life-like character. In Proceedings of the 7th European Conference on Speech Communication and Technology, pages 1547-1550.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Towards developing general model of usability with PARADISE", |
| "authors": [ |
| { |
| "first": "Marilyn", |
| "middle": [ |
| "A" |
| ], |
| "last": "Walker", |
| "suffix": "" |
| }, |
| { |
| "first": "Candace", |
| "middle": [ |
| "A" |
| ], |
| "last": "Kamm", |
| "suffix": "" |
| }, |
| { |
| "first": "Diane", |
| "middle": [ |
| "J" |
| ], |
| "last": "Litman", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Natural Language Engineering", |
| "volume": "6", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marilyn A. Walker, Candace A. Kamm, and Diane J. Lit- man. 2000. Towards developing general model of us- ability with PARADISE. Natural Language Engeneer- ing, 6.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "text": "Figure 1): General Process, a set of the most general processes such as duplication, imitation or repetition processes; Mental Process, a set of processes such as cognitive, emotional or perceptual processes; Physical Process, a set of processes such as motion, transaction or controlling processes; Social Process, a set of processes such as communication or instruction processes. Let us consider the definition of the Information Search Process in the ontology. It is modeled as a subclass of the Cognitive Process, which is a subclass of the Mental Process and inherits the following slot constraints: begin time, a time expression indicating the starting time point; end time, a time expression indicating the time point when the process is complete; state, one of the abstract process states, e.g. start, continue, interrupt, etc.; cognizer, filled with a class Person including its subclasses.", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF2": { |
| "text": "Upper part of the process hierarchy.", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF8": { |
| "text": "According to the concept entries in the lexicon, the SRHs are transformed into two alternative concept representations. As no ambiguous words are found in this example, Person; Map; Parting Process.", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF9": { |
| "text": "constitutes the most semantically coherent representation of the initial SRH,", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF10": { |
| "text": "Finding the optimal threshold for the coherent versus incoherent classification", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| }, |
| "TABREF1": { |
| "text": "Information Search Process, which has an agent of type User and a piece of information of type Sight. Sight has a name of type Castle. Analogously, the utterance:", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>(4) Ich h\u00e4tte gerne Informationen zum Schloss</td></tr><tr><td>I would like information about castle</td></tr><tr><td>can also be mapped onto</td></tr><tr><td>(5) Wie kann ich den Fernseher steuern</td></tr><tr><td>How can I the TV control</td></tr><tr><td>can be mapped onto Information Search Process, which has an agent of type User and has a piece of information of type Controlling Tv Device Process.</td></tr></table>", |
| "type_str": "table" |
| } |
| } |
| } |
| } |