{
"paper_id": "2007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:50:49.352802Z"
},
"title": "Corpus-Based Training of Action-Specific Language Models",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Schillingmann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bielefeld University",
"location": {
"postCode": "33615",
"settlement": "Bielefeld",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Sven",
"middle": [],
"last": "Wachsmuth",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bielefeld University",
"location": {
"postCode": "33615",
"settlement": "Bielefeld",
"country": "Germany"
}
},
"email": "swachsmu@techfak.uni-bielefeld.de"
},
{
"first": "Britta",
"middle": [],
"last": "Wrede",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bielefeld University",
"location": {
"postCode": "33615",
"settlement": "Bielefeld",
"country": "Germany"
}
},
"email": "bwrede@techfak.uni-bielefeld.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Especially in noisy environments like in human-robot interaction, visual information provides a strong cue facilitating a robust understanding of speech. In this paper, we consider the dynamic visual context of actions perceived by a camera. Based on an annotated multi-modal corpus of people who verbally explain tasks while they perform them, we present an automatic strategy for learning action-specific language models. The approach explicitly deals with the asynchrony of actions and verbal descriptions and includes an automatic parameter optimization based on a perplexity measure. Results show that a significant improvement of the word accuracy can be achieved using a dynamic switching of action-specific language models.",
"pdf_parse": {
"paper_id": "2007",
"_pdf_hash": "",
"abstract": [
{
"text": "Especially in noisy environments like in human-robot interaction, visual information provides a strong cue facilitating a robust understanding of speech. In this paper, we consider the dynamic visual context of actions perceived by a camera. Based on an annotated multi-modal corpus of people who verbally explain tasks while they perform them, we present an automatic strategy for learning action-specific language models. The approach explicitly deals with the asynchrony of actions and verbal descriptions and includes an automatic parameter optimization based on a perplexity measure. Results show that a significant improvement of the word accuracy can be achieved using a dynamic switching of action-specific language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "While speech recognition is an easy task for humans even under difficult acoustic conditions, current ASR systems still cannot compete with humans (Potamianos et al., 2003). This is especially true in human-robot interaction, where one has to deal with spontaneous speech effects, noisy environments, communicative gestures, and frequent references to visual objects and events. In this case, speech recognition and understanding becomes a multi-modal issue. This has also been emphasized by several psychological studies that suggest a very early interaction between vision and speech processing (Spivey et al., 2001). For the practical development of speech understanding components for robotic interfaces, there are three implications. First, there is a need for multi-modal corpora in order to train and evaluate more sophisticated speech recognition models. Second, visual and acoustic speech events need to be synchronized and aligned with regard to semantic content, for learning as well as for interpretation. Third, new strategies for the early integration of visual information into the speech recognition process need to be developed. In this paper, we focus on the first and second issues and show first results for the third.",
"cite_spans": [
{
"start": 147,
"end": 172,
"text": "(Potamianos et al., 2003)",
"ref_id": "BIBREF8"
},
{
"start": 601,
"end": 622,
"text": "(Spivey et al., 2001)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The integration of speech and visual context can be treated on different levels of processing, depending on the kind of visual information considered. Motivated by the McGurk effect (1976), audio-visual speech recognition (AVSR) systems have been developed. These systems integrate acoustic features with features extracted from the speaker's face, an approximately synchronous process during speech production. In AVSR, Hidden Markov Models (HMMs) are typically used for modelling the acoustic and visual features. The approaches mostly differ in how they handle slight asynchrony between the two feature streams. The methods range from simple feature concatenation, which does not allow any asynchrony, up to more flexible HMM architectures (e.g. Product-HMMs) allowing approximately 100 ms of asynchrony in practice (Potamianos et al., 2003).",
"cite_spans": [
{
"start": 810,
"end": 835,
"text": "(Potamianos et al., 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Other proposed systems integrate features from a static visual scene into speech recognition. Knowledge inferred from a visual scene can be used to generate grammars for object descriptions (Naeve et al., 1995). These grammars are used as a language model to improve speech recognition. Roy (2005) reports a system that fuses knowledge of the visual semantics of language with the specific contents of a visual scene during speech processing. Based on the current scene layout, the system generates possible word sequences for object descriptions from a probabilistic grammar. These are weighted by a likelihood associated with each object in the scene. The result is a bi-gram model that is dynamically updated by a visual attention mechanism incorporating the partially processed utterance. This model is used to bias speech recognition. Both approaches have in common that the scene information remains static during speech processing. Thus, the synchronization problem can be neglected and the integration is done on the level of utterances. In this case, late integration schemes are also possible, which infer a joint multi-modal meaning after a word sequence has been recognized (Wachsmuth and Sagerer, 2002).",
"cite_spans": [
{
"start": 190,
"end": 210,
"text": "(Naeve et al., 1995)",
"ref_id": "BIBREF7"
},
{
"start": 290,
"end": 300,
"text": "Roy (2005)",
"ref_id": "BIBREF10"
},
{
"start": 1191,
"end": 1220,
"text": "(Wachsmuth and Sagerer, 2002)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The timing and synchronization becomes relevant when dynamic visual events are considered as visual context. Two different cases can be distinguished. On the one hand, communicative gestures like pointing provide information that is directly related to the syntactic structure of the sentence. As a consequence, these are approximately synchronized with the corresponding noun phrases and partially marked in the wording. In this area, different research groups have started to collect multimodal corpora (Green et al., 2006; Wolf and Bugmann, 2005; Maas and Wrede, 2006) . However, in these settings, the scene environment is still static and the kind of visual information provided is of limited use in speech recognition.",
"cite_spans": [
{
"start": 505,
"end": 525,
"text": "(Green et al., 2006;",
"ref_id": "BIBREF2"
},
{
"start": 526,
"end": 549,
"text": "Wolf and Bugmann, 2005;",
"ref_id": "BIBREF13"
},
{
"start": 550,
"end": 571,
"text": "Maas and Wrede, 2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the other hand, human actions or action sequences that are verbally commented on are the most informative but also the most flexible case. Usable corpora for speech recognition training as well as evaluation are still rare. Integrating this information into speech recognition raises two problems. First, humans do not execute actions synchronously with their verbal description of the task. The degree of asynchrony lies in a range of several seconds, as reported in (Wolf and Bugmann, 2006). Second, the actions change in the course of an utterance. Thus, the contextual information is not static as in the previous systems utilizing visual scene contents.",
"cite_spans": [
{
"start": 456,
"end": 480,
"text": "(Wolf and Bugmann, 2006)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a corpus-based method for training and optimising action-specific language models. The goal is to improve recognition accuracy by using these models during speech processing. Training data for the language models is collected using a scenario described in section 2. Section 3 describes our method of associating utterance parts with actions. The resulting action-specific training data is used in an automated language model training and optimisation process. The results of this process are discussed in section 4. Our scenario resembles a situation in which a user teaches a new task to a robotic system. A test subject sits in front of a table with several objects (e.g. a cup and a plant) on it that can be utilized for different manipulative actions (Figure 1). Only a subset of the objects is relevant for the following demonstration. The subject is instructed to explain some simple tasks to the system while performing the corresponding action sequence. In order to suppress deictic gestures and overly complex descriptions, the subjects have to imagine that their communication partner is intelligent and knows the setup. The tasks are watering a plant, preparing tea, and preparing coffee. In order to generate more varied utterances, the test subjects have to perform each task twice with three different object layouts. The second time, they are additionally instructed to name colours and object relationships where possible. The utterances are recorded using a headset microphone and the scene is recorded on video. A corpus is collected containing the utterance transcriptions and the time intervals that annotate the actions. The actions performed are annotated in the video based on an abstraction hierarchy as depicted in Figure 2. The choice of the compositional granularity was based on two reasons. First, the corresponding primitives can be detected using a pre-trained trajectory-based action recogniser (Li et al., 2006). Second, the verbalization happened on that level due to the instructions given.",
"cite_spans": [
{
"start": 1937,
"end": 1954,
"text": "(Li et al., 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 780,
"end": 789,
"text": "(Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1748,
"end": 1756,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The resulting corpus consists of 195 utterances from 11 test subjects (17.7 utterances per person). The overall length is about 38 minutes. The average utterance length is about 12.7 s, with about 33 words per utterance. The entire corpus comprises 6 429 words with a lexicon size of 288 different words. The videos are annotated with 11 different actions. The average length of an action interval is 1.75 s. All in all, 999 intervals with an overall length of about 29 minutes have been annotated. Each utterance contains 5.5 actions on average. The following section describes how action-specific language models are created using this corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scenario and data collection",
"sec_num": "2"
},
{
"text": "Speech recognition models are typically formulated distinguishing acoustic and language models. The standard technique for language modelling is the n-gram, which has proven its effectiveness over many years (Rosenfeld, 2000). For acquiring realistic language models, n-grams need to be trained on a representative sample. In the present approach, we assume that the wording is biased by the action that the speaker performs and describes in parallel. Thus, we aim at the estimation of action-specific language models. In order to gain corresponding action-specific samples, two problems need to be solved. First, a method is required that is able to associate speech with action intervals in order to extract action-specific parts from an utterance. Second, our approach requires temporal information (word intervals) for both the actions and the speech. In contrast to the video annotation, the utterance transcriptions in the above-described corpus are not annotated with temporal information, and manual annotation on that level of detail is expensive. Thus, we use an automated approach, which is described in the next section. Afterwards, we elaborate on our approach to the first problem.",
"cite_spans": [
{
"start": 204,
"end": 221,
"text": "(Rosenfeld, 2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Action-Specific Language Models",
"sec_num": "3"
},
{
"text": "The temporal information of an utterance with a known transcription can be gained by using a so-called forced alignment. Our speech recogniser (Fink, 1999) uses Hidden Markov Models (HMMs) as acoustic models. Existing models trained on a speech corpus are used. Words not in the lexicon are defined by new compound models based on phoneme HMMs. In a forced alignment, the model topology is restricted in accordance with each utterance transcription. This means the order of word models is fixed for each transcription, ensuring a correct alignment although the acoustic quality varies depending on the speaker. Since the transcription does not contain pauses or spontaneous speech effects, the model topology needs to be adapted accordingly: an \"<other>\" model for these effects is optionally allowed between words. For each utterance, a sequence of MFCC feature vectors is extracted following standard speech recognition techniques. The Viterbi algorithm is used to calculate the state sequence s through the model topology that produces the feature vector sequence o with the maximum probability given the HMM \u03bb:",
"cite_spans": [
{
"start": 142,
"end": 154,
"text": "(Fink, 1999)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gaining Time Information",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s^* = argmax_s P(o, s | \u03bb)",
"eq_num": "(1)"
}
],
"section": "Gaining Time Information",
"sec_num": "3.1"
},
{
"text": "After the Viterbi alignment, the resulting state sequence can be used to calculate the time interval for each word since the frame length used during feature extraction is known. After this step, the temporal information is available for both the utterance transcription and the action annotation. The following section explains the next step where the temporal information is used to associate utterance parts with actions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaining Time Information",
"sec_num": "3.1"
},
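The step above, turning a Viterbi state sequence into word intervals via the known frame length, can be sketched as follows. This is a minimal illustration assuming a forced-alignment pass that yields one word label per 10 ms frame; the function name, label format, and frame shift are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: collapse per-frame word labels from a forced alignment
# into (word, start_s, end_s) intervals, given a fixed frame shift.
def word_intervals(frame_labels, frame_shift_s=0.01):
    """Return word time intervals; "<other>" frames (pauses, spontaneous
    speech effects) are skipped, mirroring the optional filler model."""
    intervals = []
    start = 0
    for i in range(1, len(frame_labels) + 1):
        # A segment ends at the final frame or when the label changes.
        if i == len(frame_labels) or frame_labels[i] != frame_labels[i - 1]:
            word = frame_labels[start]
            if word != "<other>":
                intervals.append((word, start * frame_shift_s,
                                  i * frame_shift_s))
            start = i
    return intervals
```

With 10 ms frames, three frames labelled with the same word yield a 30 ms interval; the filler segments simply vanish from the output.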
{
"text": "The main problem when speech has to be associated with action intervals is that the utterance parts semantically belonging to actions are asynchronous on the time-line (Wolf and Bugmann, 2006). Thus, a distance measure d(w_i, a_j) is calculated between each word w_i and action a_j. A set of tolerance parameters is used to decide whether a word is assigned to an action. By choosing these parameters appropriately, the asynchrony between speech and actions can be respected. Since the time shift is no longer than several seconds, this procedure is suitable. Multiple cases have to be handled when calculating with temporal intervals, which are systematically structured by Allen's calculus (Allen, 1983). Our method uses a subset of these relationships. Each type of action uses independent tolerance parameters h^l_j to the left and h^r_j to the right, which are applied depending on whether w_i lies before or after a_j. Pauses detected during the forced alignment give hints about the change of an action. Thus, silence is additionally weighted using a penalty parameter g_j, so that silence between an action and a word further increases the temporal difference. Figure 4 illustrates the distance measure when silence has to be considered.",
"cite_spans": [
{
"start": 168,
"end": 192,
"text": "(Wolf and Bugmann, 2006)",
"ref_id": "BIBREF14"
},
{
"start": 691,
"end": 704,
"text": "(Allen, 1983)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 1165,
"end": 1173,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pairing of Speech and Actions",
"sec_num": "3.2"
},
{
"text": "[Figure 4 residue: the interval sketch shows a word interval w_i, a silence interval, and an action interval a_j, with time points t_1 to t_6 marking their boundaries.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pairing of Speech and Actions",
"sec_num": "3.2"
},
{
"text": "Figure 4: The distance function between a word interval and an action interval in the configuration shown above is defined as",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 18,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pairing of Speech and Actions",
"sec_num": "3.2"
},
{
"text": "d(w_i, a_j) = t_3 \u2212 t_2 + g_j \u2022 (t_3 \u2212 t_5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pairing of Speech and Actions",
"sec_num": "3.2"
},
{
"text": "Figure 5 gives a simple example of the assignment strategy. The tolerance parameters are determined automatically and individually for each language model using an optimisation method, which is described in section 3.4. A word is associated with an action if the following condition is true:",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 80,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Pairing of Speech and Actions",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2212h^r_j < d(w_i, a_j) < h^l_j",
"eq_num": "(2)"
}
],
"section": "Pairing of Speech and Actions",
"sec_num": "3.2"
},
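The distance measure and the assignment condition of Eq. (2) can be sketched as two small functions. This is a hedged illustration using the time points of Figure 4 (t_2 = word end, t_3 = action start, t_5 = end of a detected silence); the function names and the simplified handling of the silence term are assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of d(w_i, a_j) and the assignment test from Eq. (2).
# All times are in seconds.
def distance(word_end, action_start, g=0.0, silence_end=None):
    """d = t_3 - t_2, plus a silence penalty g * (t_3 - t_5) when a detected
    pause separates the word from the action (cf. Figure 4)."""
    d = action_start - word_end
    if silence_end is not None:
        d += g * (action_start - silence_end)
    return d

def assigned(d, h_left, h_right):
    """Eq. (2): the word is associated with the action if -h^r < d < h^l."""
    return -h_right < d < h_left
```

With a left tolerance of 0.5 s and a right tolerance of 0, a word ending 0.3 s before the action start is assigned, while a word starting after the action end is not, matching the pourin-water example in Figure 5.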
{
"text": "The objective of the language model training is to create an n-gram model for each action type that predicts the action-specific utterance parts most accurately. These models could be trained directly with the results of the above assignment strategy, but such models would likely become too specific. Therefore, the training data is structured using the hierarchy defined in figure 2. The top level refers to the complete utterance. The second level addresses utterance parts on a more general action level, e.g. \"take\" or \"put\". The third level reaches the highest level of granularity with action-object specific utterance parts. During training, each level can be weighted using an individual factor (Figure 6: Structure of the training data using the action hierarchy; the highlighted path shows by example which parts are used and weighted to train one language model). Thus, each language model has an individual degree of specialisation depending on these factors. The training data required in this process is generated using the speech and action pairing process with an individual parameter set. Both the pairing parameters and the weighting factors are optimised specifically for each language model using a method described in the following section. (Figure 5: Assuming pourin-water has a tolerance of 0.5 s to the left and 0 s to the right, the part \"Tasse und gie\u00dfe damit die Pflanze links\" (\"cup and use it to water the plant on the left\") is assigned to this action.) During model estimation, absolute discounting and backing-off are used to handle unseen events. The counts c(yz) of a word z with history y are reduced by an absolute value \u03b2 in order to gain probability mass for unseen events, so that the relative frequencies are defined as:",
"cite_spans": [],
"ref_spans": [
{
"start": 713,
"end": 721,
"text": "figure 6",
"ref_id": null
},
{
"start": 725,
"end": 733,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Model Training",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f^*(z|y) = (c(yz) \u2212 \u03b2) / c(y\u2022), \u2200 yz : c(yz) > \u03b2",
"eq_num": "(3)"
}
],
"section": "Language Model Training",
"sec_num": "3.3"
},
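The absolute-discounting estimate of Eq. (3) can be sketched for bi-grams as follows. The freed probability mass would be redistributed to unseen events via backing-off, which is omitted here; the function name and data layout are assumptions.

```python
from collections import Counter

def discounted_bigram(tokens, beta=0.8):
    """Eq. (3): f*(z|y) = (c(yz) - beta) / c(y.) for all bigrams yz with
    c(yz) > beta. Subtracting beta from each seen count frees probability
    mass for a back-off distribution over unseen events (not shown)."""
    counts = Counter(zip(tokens, tokens[1:]))   # c(yz)
    history = Counter(tokens[:-1])              # c(y.)
    return {(y, z): (c - beta) / history[y]
            for (y, z), c in counts.items() if c > beta}
```

For a toy token stream a b a b, c(ab) = 2 and c(a.) = 2, so f*(b|a) = (2 - 0.8) / 2 = 0.6, with the remaining mass reserved for unseen continuations of "a".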
{
"text": "In the above sections, we have introduced several parameters. The tolerance parameters and the silence penalty factors sum up to 33 in total over all 11 action types. In addition, the weighting factors in the training data structure amount to another 33. This large number of free parameters cannot be determined efficiently by hand. Thus, we use an optimisation method that uses the perplexity to measure the quality of the action-specific models. We first describe the method in general and go into detail in the next paragraph. In order to compute the perplexity, a test sample is required. Since our corpus is relatively small, the choice of the test sample has a large influence on the perplexity. Therefore, the perplexity is computed using leave-one-out cross validation (Kohavi, 1995). On each run, the utterances of one person are used as testing data and the others are used for training. First, a parameter set with the above parameters is generated. This parameter set is used to train language models with the method described in the last two sections. The testing data is gained using the same parameter set. Second, the perplexity is computed for each excluded test subject. The average perplexity regarding an action-specific language model is the final measurement of this model and the underlying parameter set. Thus, the parameter optimisation also finds the tolerance parameters for the speech-action assignment, and the asynchrony between speech and actions is respected this way. This method depends on the assumption that actions frame semantic units which are verbalised similarly. Therefore, a correct assignment of speech to actions results in a better perplexity rating.",
"cite_spans": [
{
"start": 789,
"end": 803,
"text": "(Kohavi, 1995)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Optimisation",
"sec_num": "3.4"
},
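The per-speaker leave-one-out scheme described above might look like the sketch below; `train_lm` and `perplexity` are placeholders standing in for the paper's language model training and perplexity computation, and the data layout is an assumption.

```python
# Hypothetical sketch of leave-one-out cross validation over speakers.
def loo_average_perplexity(utts_by_speaker, train_lm, perplexity):
    """For each speaker, train on all other speakers' utterances and score
    the held-out speaker; return the average perplexity over all folds."""
    scores = []
    for held_out, test_utts in utts_by_speaker.items():
        train = [u for spk, utts in utts_by_speaker.items()
                 if spk != held_out for u in utts]
        scores.append(perplexity(train_lm(train), test_utts))
    return sum(scores) / len(scores)
```

In the paper's setting this loop would run 11 times, once per test subject, and the averaged perplexity scores one candidate parameter set.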
{
"text": "In detail, the optimisation is realised by evaluating a large number of parameter sets automatically. The tolerance parameters to the left and the right are varied in a range from 0 to 3 seconds with an increment of 0.5 s. The silence penalty is varied in a range from 0 to 2 analogously. The training data is weighted zero times or once on the utterance level. The action-level weighting is varied between 0 and 5. On the action-object level, weighting factors from 1 to 10 have been explored. We have chosen 12 sets of these factors in order to evaluate models with different degrees of specialisation. All combinations of these parameters result in 2 892 different sets. Each one is used to generate a complete set of action-specific bi-gram language models. Unseen events are handled using absolute discounting with \u03b2 = 0.8. Due to the large number of parameter sets and the resulting complexity, this factor has not been made subject to optimisation. Furthermore, informal tests have shown that the discounting factor has insignificant influence on this method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Optimisation",
"sec_num": "3.4"
},
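Enumerating such a parameter grid is a simple cross product. In the sketch below the step sizes follow the text where given, but the silence-penalty increment and the enumeration of the 12 weighting-factor sets are assumptions, so the resulting count does not reproduce the paper's 2 892 sets exactly.

```python
from itertools import product

tolerances = [i * 0.5 for i in range(7)]         # h^l, h^r: 0 .. 3 s, step 0.5 s
silence_penalties = [i * 0.5 for i in range(5)]  # g: 0 .. 2 (step assumed)
weight_sets = list(range(12))                    # 12 chosen weighting-factor sets

# One entry per (left tolerance, right tolerance, penalty, weighting set);
# each entry would be used to train and score a full set of bi-gram models.
grid = list(product(tolerances, tolerances, silence_penalties, weight_sets))
```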
{
"text": "After the action-specific language models have been created, the perplexity is computed so that each combination of a language model and its underlying parameter set is associated with a perplexity value. This way, the perplexity can be used as the optimisation criterion to find the best language model for each type of action. In the following section, we present first results gathered using these models during speech processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Optimisation",
"sec_num": "3.4"
},
{
"text": "The language models' quality is evaluated by assessing the corresponding speech recognition performance. Our speech recogniser uses a standard time-synchronous integrated search strategy to weight hypotheses generated by the acoustic model additionally with the language model. We have implemented a strategy that enables the speech recogniser to switch language models during speech processing (Table 2: Comparison of the perplexity of the action-specific models against the perplexity of a standard bi-gram trained on the whole utterances; the language models are trained with utterance parts on the action-object level only.)",
"cite_spans": [],
"ref_spans": [
{
"start": 397,
"end": 404,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "using a set of switch points. In our case, these switch points are generated from the action annotation. Two strategies have been implemented. The stick strategy uses exactly the interval borders and a default model when no annotation is available, e.g. between two intervals. The expand strategy expands each action interval as far as possible so that an action-specific model is always used. All results are computed using a leave-one-out cross validation as described in section 3.4. The audio data belonging to the excluded test subject of each run is used for evaluating the speech recognizer. Afterwards, the word accuracy W_ACC and the word correctness W_CORR are calculated. In order to see how the degree of specialisation affects the recognition results, it is possible to apply restrictions during optimisation. (Figure 7: Overview of the average perplexity against word accuracy for all evaluation results. Models that are more specific have a lower perplexity. The difference between random and correct usage is larger for models that are more specific. The optimal results are slightly more specific than the standard bi-gram. Non-optimal models with up to 80 % of the top rated models thrown away do not reach this result. The keywords expand and stick denote the switching strategy, where expand means each action interval is expanded as much as possible.) In the following, we",
"cite_spans": [],
"ref_spans": [
{
"start": 841,
"end": 849,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
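The stick and expand switching strategies can be sketched as a lookup from utterance time to the active language model. The function and interval format below are assumptions for illustration, not the recogniser's actual interface.

```python
# Hypothetical sketch of language model switching over annotated intervals.
def model_at(t, intervals, default="base", expand=False):
    """Return the action whose language model is active at time t.
    intervals: list of (start, end, action) tuples from the annotation.
    stick  (expand=False): use a default model between intervals.
    expand (expand=True):  snap to the nearest action interval instead,
    so an action-specific model is always in use."""
    for start, end, action in intervals:
        if start <= t < end:
            return action
    if not expand:
        return default
    nearest = min(intervals,
                  key=lambda iv: min(abs(t - iv[0]), abs(t - iv[1])))
    return nearest[2]
```

Between two annotated intervals, stick falls back to the base bi-gram while expand attributes the gap to whichever neighbouring action interval is closer.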
{
"text": "present detailed results using very specialised models on the one hand, and results where the degree of specialisation has also been made subject to optimisation on the other. The results are compared against recognition results using a standard bi-gram model trained on the complete utterance level (base result). Another comparison is made against results where an action-specific model is randomly selected for each action interval during speech recognition, in order to evaluate the models' level of specialisation. Table 1 shows results using very specific models trained with utterance parts on the action-object level only. These models are too specific, since the results are worse than those of a standard bi-gram model. The perplexity difference in table 2 shows that these models are much more specific to the action context than the standard bi-gram model. The random usage result confirms that parts not belonging to the corresponding action context are not well described by the model. (Table 6: Weighting factors per action, given as Utt./Ac./Ac.-Obj.: take-cup 1/0/3, take-tea 1/0/1, take-sugar 1/0/3, take-milk 1/0/3, putdown-tea 1/0/5, putdown-cup 1/0/1, putdown-milk 1/1/10, pourin-tea 1/1/5, pourin-sugar 1/1/5, pourin-milk 1/0/5, pourin-water 1/0/3.)",
"cite_spans": [],
"ref_spans": [
{
"start": 745,
"end": 752,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Action",
"sec_num": null
},
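The word accuracy and word correctness used above are standard alignment-based measures: W_ACC = (N - S - D - I)/N and W_CORR = (N - S - D)/N, where S, D, I count substitutions, deletions, and insertions in a Levenshtein alignment of reference and hypothesis, and N is the reference length. The implementation details below are a generic sketch, not the paper's scoring code.

```python
# Sketch: count substitutions, deletions, insertions via a Levenshtein DP.
def error_counts(ref, hyp):
    """dp[i][j] = (edits, subs, dels, ins) for ref[:i] vs hyp[:j]."""
    dp = [[(j, 0, 0, j) for j in range(len(hyp) + 1)]]
    for i in range(1, len(ref) + 1):
        row = [(i, 0, i, 0)]
        for j in range(1, len(hyp) + 1):
            if ref[i - 1] == hyp[j - 1]:
                row.append(dp[i - 1][j - 1])       # match, no edit
            else:
                sub, dele, ins = dp[i - 1][j - 1], dp[i - 1][j], row[j - 1]
                row.append(min(
                    (sub[0] + 1, sub[1] + 1, sub[2], sub[3]),
                    (dele[0] + 1, dele[1], dele[2] + 1, dele[3]),
                    (ins[0] + 1, ins[1], ins[2], ins[3] + 1)))
        dp.append(row)
    return dp[-1][-1]

def wacc_wcorr(ref, hyp):
    """Word accuracy (penalises insertions) and word correctness (does not)."""
    _, s, d, i = error_counts(ref, hyp)
    n = len(ref)
    return (n - s - d - i) / n, (n - s - d) / n
```

An inserted word lowers W_ACC but not W_CORR, which is why both figures are reported side by side in the result tables.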
{
"text": "Since very specific models with a low perplexity do not improve recognition results restrictions are applied during optimisation. The results in table 3 are generated using language models, which have been trained using the utterance level always once. The other weighting factors have been made subject to optimisation. The results are significantly better in comparison to the standard model. In contrast to the very specific models, the perplexity difference to the base model is smaller (see table 4 ). The random usage results emphasise the high level of generalisation. Table 5 shows the optimised tolerance parameters. The according weighting factors are shown in table 6. As one can see, the action-level seems to be of less importance to the specialisation and is therefore rarely used.",
"cite_spans": [],
"ref_spans": [
{
"start": 496,
"end": 503,
"text": "table 4",
"ref_id": "TABREF5"
},
{
"start": 576,
"end": 583,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Action",
"sec_num": null
},
{
"text": "We have evaluated more action-specific models optimised under different restrictions. These results are summarized in figure 7. In order to verify that our method actually finds action-specific models with better results than others trained during the optimisation process, we have additionally evaluated non-optimal action-specific models with a lower perplexity. These models are selected by leaving different percentages (from 10 % up to 80 %) of the top rated models unconsidered during the optimisation process. The figure shows that these models indeed produce worse recognition results than the fully optimised ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Action",
"sec_num": null
},
{
"text": "We have demonstrated an approach to include visual context in speech recognition by means of action-specific language models, which are automatically trained and optimised. The action-specific utterance parts required for training are gained using an automatic method for associating actions with speech. The method only requires manual annotation at a coarse level of detail. The perplexity is used as the optimisation criterion for the training parameter sets, and a detailed analysis shows the adequacy of this approach. In order to ensure a certain level of generalisation, the complete utterance level always has to be used. The optimisation under this restriction delivers the best results, which are significantly improved in comparison to speech processing with a standard bi-gram model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Outlook",
"sec_num": "5"
},
{
"text": "Although this approach is able to improve speech recognition, the pairing of speech and actions happens on a heuristic level. Further research has to show to what extent this association delivers semantically correct results. In contrast to knowledge-based methods, our approach can easily be transferred to other domains due to the automated pairing and training process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Outlook",
"sec_num": "5"
},
{
"text": "Further applications of action-specific language models could make it possible to extract action hypotheses during speech recognition. In order to realise that, multiple models could be matched against each other during speech processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Outlook",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Maintaining knowledge about temporal intervals",
"authors": [
{
"first": "James",
"middle": [
"F"
],
"last": "Allen",
"suffix": ""
}
],
"year": 1983,
"venue": "Communications of the ACM",
"volume": "26",
"issue": "11",
"pages": "832--843",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James F. Allen. 1983. Maintaining knowledge about temporal intervals. Communications of the ACM, 26(11):832-843, November.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Developing HMM-based recognizers with ESMERALDA",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Fink",
"suffix": ""
}
],
"year": 1999,
"venue": "Lecture Notes in Artificial Intelligence",
"volume": "1692",
"issue": "",
"pages": "229--234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. A. Fink. 1999. Developing HMM-based recogniz- ers with ESMERALDA. In V\u00e1clav Matou\u0161ek, Pavel Mautner, Jana Ocel\u00edkov\u00e1, and Petr Sojka, editors, Lecture Notes in Artificial Intelligence, volume 1692, pages 229-234, Berlin Heidelberg. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Developing a contextualized mulimodal corpus for human-robot interaction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "H\u00fcttenrauch",
"suffix": ""
},
{
"first": "E",
"middle": [
"A"
],
"last": "Topp",
"suffix": ""
},
{
"first": "K",
"middle": [
"S"
],
"last": "Eklundh",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of Int. Conf. on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Green, H. H\u00fcttenrauch, E. A. Topp, and K. S. Eklundh. 2006. Developing a contextualized mulimodal corpus for human-robot interaction. In Proc. of Int. Conf. on Language Resources and Evaluation (LREC), Genua.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A study of cross-validation and bootstrap for accuracy estimation and model selection",
"authors": [
{
"first": "Ron",
"middle": [],
"last": "Kohavi",
"suffix": ""
}
],
"year": 1995,
"venue": "International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1137--1145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ron Kohavi. 1995. A study of cross-validation and bootstrap for accuracy estimation and model selection. In International Joint Conference on Artificial Intelli- gence, pages 1137-1145.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An object-oriented approach using a top-down and bottom-up process for manipulative action recognition",
"authors": [
{
"first": "Zhe",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jannik",
"middle": [],
"last": "Fritsch",
"suffix": ""
},
{
"first": "Sven",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Sagerer",
"suffix": ""
}
],
"year": 2006,
"venue": "Lecture Notes in Computer Science",
"volume": "06",
"issue": "",
"pages": "212--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhe Li, Jannik Fritsch, Sven Wachsmuth, and Gerhard Sagerer. 2006. An object-oriented approach using a top-down and bottom-up process for manipulative ac- tion recognition. In DAGM06, volume 4174 of Lecture Notes in Computer Science, pages 212-221, Heidel- berg, Germany. Springer-Verlag.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BITT: A corpus for topic tracking evaluation on multimodal human-robotinteraction",
"authors": [
{
"first": "Jan",
"middle": [
"F"
],
"last": "Maas",
"suffix": ""
},
{
"first": "Britta",
"middle": [],
"last": "Wrede",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the international conference on Language and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan F. Maas and Britta Wrede. 2006. BITT: A corpus for topic tracking evaluation on multimodal human-robot- interaction. In Proceedings of the international con- ference on Language and Evaluation (LREC), Genoa, Italy.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Hearing lips and seeing voices",
"authors": [
{
"first": "Harry",
"middle": [],
"last": "Mcgurk",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Macdonald",
"suffix": ""
}
],
"year": 1976,
"venue": "Nature",
"volume": "264",
"issue": "5588",
"pages": "746--748",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harry Mcgurk and John Macdonald. 1976. Hearing lips and seeing voices. Nature, 264(5588):746-748, Dezember.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Generation of language models using the results of image analysis",
"authors": [
{
"first": "U",
"middle": [],
"last": "Naeve",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "G",
"middle": [
"A"
],
"last": "Fink",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Kummert",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Sagerer",
"suffix": ""
}
],
"year": 1995,
"venue": "European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "1739--1742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "U. Naeve, G. Socher, G. A. Fink, F. Kummert, and G. Sagerer. 1995. Generation of language models us- ing the results of image analysis. In European Con- ference on Speech Communication and Technology, pages 1739-1742, Madrid.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Recent advances in the automatic recognition of audiovisual speech",
"authors": [
{
"first": "G",
"middle": [],
"last": "Potamianos",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Neti",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Gravier",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "A",
"middle": [
"W"
],
"last": "Senior",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the IEEE",
"volume": "91",
"issue": "9",
"pages": "1306--1326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Potamianos, C. Neti, G. Gravier, A. Garg, and A. W. Senior. 2003. Recent advances in the automatic recog- nition of audiovisual speech. Proceedings of the IEEE, 91(9):1306-1326.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Two decades of statistical language modeling: where do we go from here?",
"authors": [
{
"first": "R",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the IEEE",
"volume": "88",
"issue": "8",
"pages": "1270--1278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Rosenfeld. 2000. Two decades of statistical language modeling: where do we go from here? Proceedings of the IEEE, 88(8):1270-1278, Aug.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Towards situated speech understanding: visual context priming of language models",
"authors": [
{
"first": "Deb",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Niloy",
"middle": [],
"last": "Mukherjee",
"suffix": ""
}
],
"year": 2005,
"venue": "Computer Speech & Language",
"volume": "19",
"issue": "2",
"pages": "227--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deb Roy and Niloy Mukherjee. 2005. Towards sit- uated speech understanding: visual context priming of language models. Computer Speech & Language, 19(2):227-248, April.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Linguistically mediated visual search",
"authors": [
{
"first": "M",
"middle": [
"J"
],
"last": "Spivey",
"suffix": ""
},
{
"first": "M",
"middle": [
"J"
],
"last": "Tyler",
"suffix": ""
},
{
"first": "K",
"middle": [
"M"
],
"last": "Eberhard",
"suffix": ""
},
{
"first": "M",
"middle": [
"K"
],
"last": "Tanenhaus",
"suffix": ""
}
],
"year": 2001,
"venue": "Psychological Science",
"volume": "12",
"issue": "4",
"pages": "282--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. J. Spivey, M. J. Tyler, K. M. Eberhard, and M.K. Tanenhaus. 2001. Linguistically mediated visual search. Psychological Science, 12(4):282-286, July.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bayesian Networks for Speech and Image Integration",
"authors": [
{
"first": "S",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Sagerer",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of 18th National Conf. on Artificial Intelligence (AAAI-2002)",
"volume": "",
"issue": "",
"pages": "300--306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Wachsmuth and G. Sagerer. 2002. Bayesian Networks for Speech and Image Integration. In Proc. of 18th National Conf. on Artificial Intelligence (AAAI-2002), pages 300-306, Edmonton, Alberta, Canada.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multimodal corpus collection for the design of user-programmable robots",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "Wolf",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Bugmann",
"suffix": ""
}
],
"year": 2005,
"venue": "TAROS 2005 Towards Autonomous Robotic Systems Incorporating the Autumn Biro-Net Symposium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. C. Wolf and G. Bugmann. 2005. Multimodal corpus collection for the design of user-programmable robots. In TAROS 2005 Towards Autonomous Robotic Sys- tems Incorporating the Autumn Biro-Net Symposium, September.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Linking speech and gesture in multimodal instruction systems",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "Wolf",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Bugmann",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE International Symposium on Robot and Human Interactive Communication",
"volume": "",
"issue": "",
"pages": "141--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. C. Wolf and G. Bugmann. 2006. Linking speech and gesture in multimodal instruction systems. In IEEE International Symposium on Robot and Human Inter- active Communication, pages 141-144, Hatfield, UK, September.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "A test subject describes a task while performing it.",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Hierarchic structure of actions used for annotation.",
"uris": null,
"num": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Figure 3shows a schematic diagram of the model topology. For",
"uris": null,
"num": null
},
"FIGREF4": {
"type_str": "figure",
"text": "Schematic diagram of a HMM topology with fixed word model order and optional \"<other>\" models.",
"uris": null,
"num": null
},
"FIGREF5": {
"type_str": "figure",
"text": "Augmented utterance transcription and action annotation on one time axis (t[s])",
"uris": null,
"num": null
},
"TABREF4": {
"text": "",
"html": null,
"content": "<table><tr><td colspan=\"4\">: Recognition results (expand strategy) using</td></tr><tr><td colspan=\"4\">optimised action-specific language models, trained</td></tr><tr><td colspan=\"4\">using the utterance level always once. Weighting</td></tr><tr><td colspan=\"4\">factors have been made subject to optimisation.</td></tr><tr><td>Action</td><td colspan=\"2\">Base Model</td><td>Diff</td></tr><tr><td/><td>perp.</td><td>perp.</td></tr><tr><td>take-cup</td><td colspan=\"2\">20,43 17,59</td><td>2,84</td></tr><tr><td>take-tea</td><td colspan=\"2\">26,59 25,15</td><td>1,44</td></tr><tr><td>take-sugar</td><td colspan=\"2\">23,36 18,98</td><td>4,38</td></tr><tr><td>take-milk</td><td colspan=\"2\">22,68 21,63</td><td>1,05</td></tr><tr><td>putdown-tea</td><td colspan=\"2\">26,36 20,57</td><td>5,80</td></tr><tr><td>putdown-cup</td><td colspan=\"2\">22,51 20,91</td><td>1,60</td></tr><tr><td colspan=\"3\">putdown-milk 30,46 21,95</td><td>8,51</td></tr><tr><td>pourin-tea</td><td colspan=\"2\">27,27 22,51</td><td>4,77</td></tr><tr><td>pourin-sugar</td><td colspan=\"2\">20,33 15,40</td><td>4,93</td></tr><tr><td>pourin-milk</td><td colspan=\"2\">31,34 25,46</td><td>5,88</td></tr><tr><td>pourin-water</td><td colspan=\"2\">29,53 24,62</td><td>4,91</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF5": {
"text": "",
"html": null,
"content": "<table><tr><td colspan=\"4\">: Comparison of the perplexity regarding the</td></tr><tr><td colspan=\"4\">action-specific models against the perplexity using</td></tr><tr><td colspan=\"4\">a standard bi-gram trained on the whole utterances.</td></tr><tr><td colspan=\"4\">The language models are trained using the utterance</td></tr><tr><td>level always once.</td><td/><td/><td/></tr><tr><td>Action</td><td colspan=\"2\">Tolerance [s]</td><td>Silence-</td></tr><tr><td/><td>left</td><td>right</td><td>penalty</td></tr><tr><td>take-cup</td><td>2.00</td><td>1.00</td><td>2.00</td></tr><tr><td>take-tea</td><td>3.00</td><td>3.00</td><td>0.00</td></tr><tr><td>take-sugar</td><td>0.00</td><td>3.00</td><td>1.00</td></tr><tr><td>take-milk</td><td>3.00</td><td>2.50</td><td>0.00</td></tr><tr><td>putdown-tea</td><td>2.50</td><td>0.00</td><td>0.50</td></tr><tr><td>putdown-cup</td><td>3.00</td><td>0.50</td><td>0.50</td></tr><tr><td>putdown-milk</td><td>0.50</td><td>0.00</td><td>1.00</td></tr><tr><td>pourin-tea</td><td>0.50</td><td>2.50</td><td>1.00</td></tr><tr><td>pourin-sugar</td><td>0.50</td><td>1.00</td><td>1.50</td></tr><tr><td>pourin-milk</td><td>0.00</td><td>2.00</td><td>0.00</td></tr><tr><td>pourin-water</td><td>2.50</td><td>1.50</td><td>0.00</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF6": {
"text": "Tolerance parameters found by the optimisation process (cp. table 4). The language models are trained using the utterance level always once.",
"html": null,
"content": "<table><tr><td>75</td><td/><td/><td/><td/><td/></tr><tr><td>70</td><td/><td/><td/><td/><td/></tr><tr><td>55 60 65</td><td>Word Accuracy</td><td/><td/><td>Base Result Transcription Perplexity stick expand random, expand</td><td/></tr><tr><td>50</td><td/><td/><td/><td>random, stick</td><td/></tr><tr><td/><td/><td>Perplexity</td><td/><td>non optimal</td><td/></tr><tr><td>45</td><td/><td/><td/><td/><td/></tr><tr><td>10</td><td>15</td><td>20</td><td>25</td><td>30</td><td>35</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF7": {
"text": "Weighting factors determined during parameter optimisation (cp. table 4).",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
}
}
}
}