{
"paper_id": "P99-1049",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:32:08.744712Z"
},
"title": "An Efficient Statistical Speech Act Type Tagging System for Speech Translation Systems",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Tanaka",
"suffix": "",
"affiliation": {
"laboratory": "ATR Interpreting Telecommunications Research Laboratories",
"institution": "",
"location": {
"addrLine": "2-2, Seika-cho, Soraku-gun",
"postCode": "619-0288",
"settlement": "Hikaridai, Kyoto",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Akio",
"middle": [],
"last": "Yokoo",
"suffix": "",
"affiliation": {
"laboratory": "ATR Interpreting Telecommunications Research Laboratories",
"institution": "",
"location": {
"addrLine": "2-2, Seika-cho, Soraku-gun",
"postCode": "619-0288",
"settlement": "Hikaridai, Kyoto",
"country": "Japan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a new efficient speech act type tagging system. This system covers the tasks of (1) segmenting a turn into the optimal number of speech act units (SA units), and (2) assigning a speech act type tag (SA tag) to each SA unit. Our method is based on a theoretically clear statistical model that integrates linguistic, acoustic and situational information. We report tagging experiments on Japanese and English dialogue corpora manually labeled with SA tags. We then discuss the performance difference between the two languages. We also report on some translation experiments on positive response expressions using SA tags.",
"pdf_parse": {
"paper_id": "P99-1049",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a new efficient speech act type tagging system. This system covers the tasks of (1) segmenting a turn into the optimal number of speech act units (SA units), and (2) assigning a speech act type tag (SA tag) to each SA unit. Our method is based on a theoretically clear statistical model that integrates linguistic, acoustic and situational information. We report tagging experiments on Japanese and English dialogue corpora manually labeled with SA tags. We then discuss the performance difference between the two languages. We also report on some translation experiments on positive response expressions using SA tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper describes a statistical speech act type tagging system that utilizes linguistic, acoustic and situational features. This work can be viewed as a study on automatic \"Discourse Tagging\" whose objective is to assign tags to discourse units in texts or dialogues. Discourse tagging is studied mainly from two different viewpoints, i.e., linguistic and engineering viewpoints. The work described here belongs to the latter group. More specifically, we are interested in automatically recognizing the speech act types of utterances and in applying them to speech translation systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several studies on discourse tagging to date have been motivated by engineering applications. The early studies by Nagata and Morimoto (1994) and Reithinger and Maier (1995) showed the possibility of predicting dialogue act tags for next utterances with statistical methods. These studies, however, presupposed properly segmented utterances, which is not a realistic assumption. In contrast to this assumption, automatic utterance segmentation (or discourse segmentation) is desired here.",
"cite_spans": [
{
"start": 115,
"end": 141,
"text": "Nagata and Morimoto (1994)",
"ref_id": "BIBREF6"
},
{
"start": 146,
"end": 173,
"text": "Reithinger and Maier (1995)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Discourse segmentation in linguistics, whether manual or automatic, has also received keen attention because such segmentation provides the foundation of higher discourse structures (Grosz and Sidner, 1986) .",
"cite_spans": [
{
"start": 182,
"end": 206,
"text": "(Grosz and Sidner, 1986)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Discourse segmentation has also received keen attention from the engineering side because the natural language processing systems that follow the speech recognition system are designed to accept linguistically meaningful units (Stolcke and Shriberg, 1996) . There has been a lot of research following this line such as (Stolcke and Shriberg, 1996) (Cettolo and Falavigna, 1998) , to only mention a few.",
"cite_spans": [
{
"start": 227,
"end": 255,
"text": "(Stolcke and Shriberg, 1996)",
"ref_id": "BIBREF13"
},
{
"start": 319,
"end": 347,
"text": "(Stolcke and Shriberg, 1996)",
"ref_id": "BIBREF13"
},
{
"start": 348,
"end": 377,
"text": "(Cettolo and Falavigna, 1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We can take advantage of these studies as a preprocess for tagging. In this paper, however, we propose a statistical tagging system that optimally performs segmentation and tagging at the same time. Previous studies like (Litman and Passonneau, 1995) have pointed out that the use of a multiple information source can contribute to better segmentation and tagging, and so our statistical model integrates linguistic, acoustic and situational information.",
"cite_spans": [
{
"start": 221,
"end": 250,
"text": "(Litman and Passonneau, 1995)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The problem can be formalized as a search problem on a word graph, which can be efficiently handled by an extended dynamic programming algorithm. Actually, we can efficiently find the optimal solution without limiting the search space at all.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The results of our tagging experiments involving both Japanese and English corpora indicated a high performance for Japanese but a considerably lower performance for the English corpora. This work also reports on the use of speech act type tags for translating Japanese and English positive response expressions. Positive responses quite often appear in task-oriented dialogues like those in our tasks. They are often highly ambiguous and problematic in speech translation. We will show that these expressions can be effectively translated with the help of dialogue information, which we call speech act type tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Problems",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "In this section, we briefly explain our speech act type tags and the tagged data and then formally define the tagging problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "The data used in this study is a collection of transcribed dialogues on a travel arrangement task between Japanese and English speakers mediated by interpreters . The transcriptions were separated by language, i.e., English and Japanese, and the resultant two corpora share the same content. Both transcriptions went through morphological analysis, which was manually checked. The transcriptions have clear turn boundaries (TB's). Some of the Japanese and English dialogue files were manually segmented into speech act units (SA units) and assigned with speech act type tags (SA tags). The SA tags represent a speaker's intention in an utterance and are more or less similar to the traditional illocutionary force type (Searle, 1969) .",
"cite_spans": [
{
"start": 525,
"end": 535,
"text": "(SA units)",
"ref_id": null
},
{
"start": 719,
"end": 733,
"text": "(Searle, 1969)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Tags",
"sec_num": "2.1"
},
{
"text": "The SA tags for the Japanese language were based on the set proposed by Seligman et al. (1994) and had 29 types. The English SA tags were based on the Japanese tags, but we redesigned and reduced the size to 17 types. We believed that an excessively detailed tag classification would decrease the intercoder reliability and so pruned some detailed tags. The following lines show an example of the English tagged dialogues. Two turns uttered by a hotel clerk and a customer were segmented into SA units and assigned with SA tags. <clerk's turn> Hello, (expressive) New York City Hotel, (inform) may I help you ? (offer) <customer(interpreter)'s turn> Hello, (expressive) my name is Hiroko Tanaka (inform) and I would like to make a reservation for a room at your hotel. (desire) The tagging work to the dialogue was conducted by experts who studied the tagging manual beforehand. The manual described the tag definitions and turn segmentation strategies and gave examples. The work involved three experts for the Japanese corpus and two experts for the English corpus. 2",
"cite_spans": [
{
"start": 72,
"end": 94,
"text": "Seligman et al. (1994)",
"ref_id": "BIBREF12"
},
{
"start": 769,
"end": 777,
"text": "(desire)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Tags",
"sec_num": "2.1"
},
{
"text": "The result was checked and corrected by one expert for each language. Since this final check was done by a single expert, the inter-coder tagging instability was suppressed to a minimum. As a result of the tagging, we obtained 95 common dialogue files with SA tags for Japanese and English and used them in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Tags",
"sec_num": "2.1"
},
{
"text": "1 Japanese tags, for example, had four tags mainly used for dialogue endings: thank, offer-follow-up, good-wishes, and farewell, most of which were reduced to expressive in English. 2 They did not listen to the recorded sounds in either case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Tags",
"sec_num": "2.1"
},
{
"text": "Our tagging system assumes an input of a word sequence for a dialogue produced by a speech recognition system. The word sequence is accompanied with clear turn boundaries. Here, the words do not contain any punctuation marks. The word sequence can be viewed as a sequence of quadruples: (wi, li, ai, si). The task of speech act type tagging in this paper covers two tasks: (1) segmentation of a word sequence into the optimal number of SA units, and (2) assignment of an SA tag to each SA unit. Here, the input is a word sequence with clear TB's, and our tagger takes each turn as a process unit. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2.2"
},
{
"text": "(w1, l1, a1, s1), ..., (wi-1, li-1, ai-1, si-1), ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2.2"
},
{
"text": "In this paper, an SA unit is denoted as u and the sequence is denoted as U. An SA tag is denoted as t and the sequence is denoted as T. x_s^e represents a sequence of x starting from s to e. Therefore, t_1^j represents a tag sequence from 1 to j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2.2"
},
{
"text": "The task is now formally addressed as follows: find the best SA unit sequence U and tag sequence T for each turn when a word sequence W with clear TB's is given. We will treat this problem with the statistical model described in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2.2"
},
{
"text": "The problem addressed in Section 2 can be formalized as a search problem in a word graph that holds all possible combinations of SA units in a turn. We take a probabilistic approach to this problem, which formalizes it as finding a path (U, T) in the word graph that maximizes the probability P(U, T | W).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Model",
"sec_num": "3"
},
{
"text": "3 Although we do not explicitly represent TB's in a word sequence in the following discussions, one might assume virtual TB markers like @ in the word sequence. This is formally represented in equation 1. This probability is naturally decomposed into the product of two terms as in equation 3. The first probability in equation 3 represents an arbitrary word sequence constituting one SA unit uj, given hj (the history of SA units and tags from the beginning of a dialogue, hj = u_1^{j-1}, t_1^{j-1}) and input W. The second probability represents the current SA unit uj bearing a particular SA tag tj, given uj, hj, and W.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Model",
"sec_num": "3"
},
{
"text": "(U, T) = argmax_{U,T} P(U, T | W), (1) = argmax_{U,T} \u220f_{j=1}^{k} P(uj, tj | hj, W), (2) = argmax_{U,T} \u220f_{j=1}^{k} P(uj | hj, W) \u00d7 P(tj | uj, hj, W). (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Model",
"sec_num": "3"
},
{
"text": "We call the first term \"unit existence probability\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Model",
"sec_num": "3"
},
{
"text": "PE and the second term \"tagging probability\" PT. Figure 1 shows a simplified image of the probability calculation in a word graph, where we have finished processing the word sequence w_1^{s-1}. Now, we estimate the probability for the word sequence w_s^{s+p-1} constituting an SA unit uj and having a particular SA tag tj. Because of the problem of sparse data, these probabilities are hard to estimate directly from the training corpus. We will use the following approximation techniques.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 57,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Statistical Model",
"sec_num": "3"
},
{
"text": "The probability of unit existence PE is actually equivalent to the probability that the word sequence w_s, ..., w_{s+p-1} exists as one SA unit given hj and W (Fig. 1).",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 166,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Unit Existence Probability",
"sec_num": "3.1"
},
{
"text": "We then approximate PE by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unit Existence Probability",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "PE ~= P(B_{ws-1,ws} = 1 | hj, W) \u00d7 P(B_{ws+p-1,ws+p} = 1 | hj, W) \u00d7 \u220f_{x=s}^{s+p-2} P(B_{wx,wx+1} = 0 | hj, W),",
"eq_num": "(4)"
}
],
"section": "Unit Existence Probability",
"sec_num": "3.1"
},
{
"text": "where the random variable B_{wx,wx+1} takes the binary values 1 and 0. A value of 1 corresponds to the existence of an SA unit boundary between wx and wx+1, and a value of 0 to the non-existence of an SA unit boundary. PE is approximated by the product of two types of probabilities: for a word sequence break at both ends of an SA unit and for a non-break inside the unit. Notice that the probabilities of the former type adjust an unfairly high probability estimation for an SA unit that is made from a short word sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unit Existence Probability",
"sec_num": "3.1"
},
{
"text": "The estimation of PE is now reduced to that of P(B_{wx,wx+1} | hj, W). This probability is estimated by a probabilistic decision tree and we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unit Existence Probability",
"sec_num": "3.1"
},
{
"text": "P(B_{wx,wx+1} | hj, W) ~= P(B_{wx,wx+1} | \u03a6E(hj, W)),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unit Existence Probability",
"sec_num": "3.1"
},
{
"text": "where \u03a6E is a decision tree that categorizes hj, W",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unit Existence Probability",
"sec_num": "3.1"
},
{
"text": "into equivalent classes (Jelinek, 1997) . We modified a C4.5 (Quinlan, 1993) style algorithm to produce probabilities and used it for this purpose. The decision tree is known to be effective for the data sparseness problem and can take different types of parameters such as discrete and continuous values, which is useful since our word sequence contains both types of features.",
"cite_spans": [
{
"start": 24,
"end": 39,
"text": "(Jelinek, 1997)",
"ref_id": "BIBREF3"
},
{
"start": 61,
"end": 76,
"text": "(Quinlan, 1993)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unit Existence Probability",
"sec_num": "3.1"
},
{
"text": "Through preliminary experiments, we found that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unit Existence Probability",
"sec_num": "3.1"
},
{
"text": "hj (the past history of tagging results) was not useful, so we discarded it. We also found that the probability was well estimated by the information available in a short range r around wx, which is stored in W. Actually, the attributes used to develop the tree were",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unit Existence Probability",
"sec_num": "3.1"
},
{
"text": "in W' = w_{x-r+1}^{x+r}: \u2022 surface wordforms for w_{x-r+1}^{x+r}, \u2022 parts of speech for w_{x-r+1}^{x+r}, and \u2022 the pause duration between wx and wx+1. The word range r was set from 1 to 3 as we will report in sub-section 5.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unit Existence Probability",
"sec_num": "3.1"
},
{
"text": "As a result, we obtained the final form of PE as PE ~= P(B_{ws-1,ws} = 1 | \u03a6E(W')) \u00d7 P(B_{ws+p-1,ws+p} = 1 | \u03a6E(W')) \u00d7 \u220f_{x=s}^{s+p-2} P(B_{wx,wx+1} = 0 | \u03a6E(W')). (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unit Existence Probability",
"sec_num": "3.1"
},
{
"text": "3.2 Tagging Probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagging Probability",
"sec_num": "3.2"
},
{
"text": "The tagging probability PT was estimated by the following formula utilizing a decision tree \u03a6T. Two functions named f and g were also utilized to extract information from the word sequence in uj: PT ~= P(tj | \u03a6T(f(uj), g(uj), tj-1, ..., tj-m))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unit Existence Probability",
"sec_num": "3.1"
},
{
"text": "As this formula indicates, we only used information available with the uj and m histories of SA tags in hj. The function f(uj) outputs the speaker's identification of uj. The function g(uj) extracts cue words for the SA tags from uj using a cue word list. The cue word list was extracted from a training corpus that was manually labeled with the SA tags. For each SA tag, the 10 most dependent words were extracted with a \u03c72-test. After converting these into canonical forms, they were conjoined. To develop a statistical decision tree, we used an input table whose attributes consisted of a cue word list, a speaker's identification, and m previous tags. The value for each cue word was a binary value, where 1 was set when the utterance uj contained the word, and 0 otherwise. The effect of f(uj), g(uj), and length m on the tagging performance will be reported in sub-section 5.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unit Existence Probability",
"sec_num": "3.1"
},
{
"text": "A search in a word graph was conducted using the extended dynamic programming technique proposed",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Method",
"sec_num": "4"
},
{
"text": "by Nagata (1994) . This algorithm was originally developed for a statistical Japanese morphological analyzer whose tasks are to determine boundaries in an input character sequence having no separators and to give an appropriate part-of-speech tag to each word, i.e., a character sequence unit. This algorithm can handle arbitrary lengths of histories of pos tags and words and efficiently produce n-best results.",
"cite_spans": [
{
"start": 3,
"end": 16,
"text": "Nagata (1994)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Search Method",
"sec_num": "4"
},
{
"text": "We can see a high similarity between our task and Japanese morphological analysis. Our task requires the segmentation of a word sequence instead of a character sequence and the assignment of an SA tag instead of a pos tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Method",
"sec_num": "4"
},
{
"text": "The main difference is that a word dictionary is available with a morphological analyzer. Thanks to its dictionary, a morphological analyzer can assume possible morpheme boundaries. 4 Our tagger, on the other hand, has to assume that any word sequence in a turn can constitute an SA unit in the search. This difference, however, does not require any essential change in the search algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Method",
"sec_num": "4"
},
{
"text": "Tagging Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "We have conducted several tagging experiments on both the Japanese and English corpora described in sub-section 2.1. Table 1 shows a summary of the 95 files used in the experiments. In the experiments described below, we used morpheme sequences for input instead of word sequences and showed the corresponding counts. The average number of SA units per turn was 2.68 for Japanese and 2.31 for English. The average number of boundary candidates per turn was 18 for Japanese and 12.7 for English. The number of tag types, the average number of SA units, and the average number of SA boundary candidates indicated that the Japanese data were more difficult to process. 4 Also, the probability for the existence of a word can be directly estimated from the corpus. ",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 124,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Profile",
"sec_num": "5.1"
},
{
"text": "We used \"labeled bracket matching\" for evaluation (Nagata, 1994) . The result of tagging can be viewed as a set of labeled brackets, where brackets correspond to turn segmentation and their labels correspond to SA tags. With this in mind, the evaluation was done in the following way. We counted the number of brackets in the correct answer, denoted as R (reference). We also counted the number of brackets in the tagger's output, denoted as S (system). Then the number of matching brackets was counted and denoted as M (match). Thus, we could define the precision rate with M/S and the recall rate with M/R.",
"cite_spans": [
{
"start": 50,
"end": 64,
"text": "(Nagata, 1994)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methods",
"sec_num": "5.2"
},
{
"text": "The matching was judged in two ways. One was \"segmentation match\": the positions of both starting and ending brackets (boundaries) were equal. The other was \"segmentation+tagging match\": the tags of both brackets were equal in addition to the segmentation match.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methods",
"sec_num": "5.2"
},
{
"text": "The proposed evaluation simultaneously confirmed both the starting and ending positions of an SA unit and was more severe than methods that only evaluate one side of the boundary of an SA unit. Notice that the precision and recall for the segmentation+tagging match are bounded by those of the segmentation match.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methods",
"sec_num": "5.2"
},
{
"text": "The total tagging performance is affected by the two probability terms PE and PT, both of which contain the parameters in Table 2 . To find the best parameter settings, we conducted two types of experiments: I Change the parameters for PE with fixed parameters for PT. The effect of the parameters in PE was measured by the segmentation match.",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 129,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Tagging Results",
"sec_num": "5.3"
},
{
"text": "II Change the parameters for PT with fixed parameters for PE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagging Results",
"sec_num": "5.3"
},
{
"text": "The effect of the parameters in PT was measured by the segmentation+tagging match.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagging Results",
"sec_num": "5.3"
},
{
"text": "Now, we report the details with the Japanese set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagging Results",
"sec_num": "5.3"
},
{
"text": "We fixed the parameters for PT as f(uj), g(uj),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of DE with Japanese Data",
"sec_num": "5.3.1"
},
{
"text": "tj-1, i.e., a speaker's identification, cue words in the current SA unit, and the SA tag of the previous SA unit. The unit existence probability was estimated using the following parameters. Under the above conditions, we conducted 10-fold cross-validation tests and measured the average recall and precision rates in the segmentation match, which are listed in Table 3 . Here, condition (A) used surface wordforms and pos's with word range r = 1; (B) expanded the word range r to 2; (C) added the pause duration between wx and wx+1 to (A); and (D) added the pause duration to (B).",
"cite_spans": [],
"ref_spans": [
{
"start": 362,
"end": 369,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Effects of DE with Japanese Data",
"sec_num": "5.3.1"
},
{
"text": "We then conducted t-tests among these average scores. Table 4 shows the t-scores between different parameter conditions. In the following discussions, we will use the following t-scores: t_{\u03b1=0.025}(18) = 2.10 and t_{\u03b1=0.05}(18) = 1.73.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 61,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Effects of DE with Japanese Data",
"sec_num": "5.3.1"
},
{
"text": "We can note the following features from Tables 3 and 4. \u2022 recall rate (B), (C), and (D) showed statistically significant (two-sided significance level of 5%, i.e., t > 2.10) improvement from (A). (D) did not show significant improvement from either (B) or (C).",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 56,
"text": "Tables 3 and 4.",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Effects of DE with Japanese Data",
"sec_num": "5.3.1"
},
{
"text": "\u2022 precision rate Although (B) and (C) did not improve from (A) with a high statistical significance, we can observe the tendency of improvement. (D) did not show a significant difference from (B) or (C).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of DE with Japanese Data",
"sec_num": "5.3.1"
},
{
"text": "We can, therefore, say that (B) and (C) showed equally significant improvement from (A): expansion of the word range r from 1 to 2 and using pause information with word range 1. The combination of word range 2 and pause (D), however, did not show any significant differences from (B) or (C). We believe that the combination resulted in data sparseness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of DE with Japanese Data",
"sec_num": "5.3.1"
},
{
"text": "For the Type II experiments, we set the parameters for PE as condition (C): surface wordforms and pos's of w_x^{x+1} and a pause duration between wx and wx+1. Then, PT was estimated using the following parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of PT with Japanese Data",
"sec_num": "5.3.2"
},
{
"text": "(E): cue words in utterance uj, i.e., g(uj); (F): (E) with tj-1; (G): (E) with tj-1 and tj-2; (H): (E) with tj-1 and a speaker's identification f(uj). The recall and precision rates for the segmentation+tagging match were evaluated in the same way as in the previous experiments. The results are shown in Table 5 . The t-scores among these parameter settings are shown in Table 6 . We can observe the following features.",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 312,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 372,
"end": 379,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Effects of PT with Japanese Data",
"sec_num": "5.3.2"
},
{
"text": "\u2022 recall rate (F) and (G) showed an improvement from (E) with a two-sided significance level of 10% (t > 1.73).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of PT with Japanese Data",
"sec_num": "5.3.2"
},
{
"text": "Same as recall rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 precision rate",
"sec_num": null
},
{
"text": "Here, we can say that tj-1 together with the cue words (F) played the dominant role in the SA tag assignment, and the further addition of history tj-2 (G) or the speaker's identification f(uj) (H) did not result in significant improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 precision rate",
"sec_num": null
},
{
"text": "As a concise summary, the best recall and precision rates for the segmentation match were obtained with conditions (B) and (C): approximately 92% and 93%, respectively. The best recall and precision rates for the segmentation+tagging match were 74.91% and 75.35%, respectively (Table 5 (F)). We consider these figures quite satisfactory considering the severeness of our evaluation scheme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Japanese Tagging Experiments",
"sec_num": "5.3.3"
},
{
"text": "We will briefly discuss the experiments with English data. The English corpus experiments were similar to the Japanese ones. For the SA unit segmentation, we changed the word range r from 1 to 3 while fixing the parameters for PT to (H), where we obtained the best results with word range r = 2, i.e., (B). The recall rate was 71.92% and the precision rate was 78.10%. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English Tagging Experiment",
"sec_num": "5.3.4"
},
{
"text": "We conducted the exact same tagging experiments as the Japanese ones by fixing the parameter for PE to (B). Experiments with condition (H) showed the best score: the recall rate was 53.17% and the precision rate was 57.75%. We obtained lower performance than that for Japanese. This was somewhat surprising since we thought English would be easier to process. The lower performance in segmentation affected the total tagging performance. We will further discuss the difference in section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English Tagging Experiment",
"sec_num": "5.3.4"
},
{
"text": "Application of SA tags to speech translation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": "In this section, we will briefly discuss an application of SA tags to a machine translation task. 5 Experiments with pause information were not conducted. This is one",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": "of the motivations of the automatic tagging research described in the previous sections. We actually dealt with the translation problem of positive responses appearing in both Japanese and English dialogues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": "Japanese positive responses like Hai and Soudesuka, and English ones like Yes and I see, appear quite often in our corpus. Since our dialogues were collected from the travel arrangement domain, which can basically be viewed as a sequence of question-and-answer pairs, they naturally contain many of these expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": "These expressions are highly ambiguous in word sense. For example, Hai can mean Yes (accept), Uh huh (acknowledgment), Hello (greeting), and so on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": "Incorrect translation of these expressions could confuse the dialogue participants. These expressions, however, are short and do not in themselves contain enough clues for proper translation, so some other contextual information is required. We assume that SA tags can provide the necessary information, since the translations in the above examples can be distinguished by the SA tags given in parentheses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": "We conducted a series of experiments to verify if positive responses can be properly translated using SA tags with other situational information. We assumed that SA tags are properly given to these expressions and used the manually tagged corpus described in Table 1 for the experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 259,
"end": 266,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": "We collected Japanese positive responses from the SA units in the corpus. After assigning an English translation to each expression, we categorized these expressions into several representative forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": "For example, the surface Japanese expression Ee, Kekkou desu was categorized under the representative form Kekkou.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": "We also made such data for English positive responses. The size of the Japanese and English data in representative forms (equivalent to SA unit) is shown in Table 7 . Notice that 1,968 out of 5,416 Japanese SA units are positive responses and 1,037 out of 4,675 English SA units are positive responses. The Japanese data contained 16 types of English translations and the English data contained 12 types of Japanese translations in total.",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 164,
"text": "Table 7",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": "We examined the effects of all possible combinations of the following four features on translation accuracy. We trained decision trees with the C4.5 (Quinlan, 1993) type algorithm while using these features (in all possible combinations) as attributes. We will show some of the results. Table 8 shows the accuracy when using one feature as the attribute. We can naturally assume that the use of feature (I) gives the baseline accuracy.",
"cite_spans": [
{
"start": 149,
"end": 164,
"text": "(Quinlan, 1993)",
"ref_id": "BIBREF9"
},
{
"start": 403,
"end": 406,
"text": "(I)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 287,
"end": 294,
"text": "Table 8",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": "The result gives us a strange impression in that the SA tags for the previous SA units (K) were far more effective than the SA tags for the positive responses themselves (J). This phenomenon can be explained by the variety of tag types given to the utterances. Positive response expressions of the same representative form have at most a few SA tag types, say two, whereas the previous SA units can have many SA tag types. If a positive response expression possesses five translations, it cannot be translated with only two SA tags. Table 9 shows the best feature combinations at each number of features from 1 to 4. The best feature combinations were exactly the same for both translation directions, Japanese to English and vice versa. The percentages are the average accuracy obtained by 10-fold cross-validation, and the t-score in each row indicates the effect of adding one feature to the combination in the row above. We again regard a t-score greater than 2.01 as significant (two-sided significance level of 5%).",
"cite_spans": [],
"ref_spans": [
{
"start": 532,
"end": 539,
"text": "Table 9",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": "The accuracy for Japanese translation was saturated with the two features (K) and (I). Further addition of any feature did not show any significant improvement. The SA tag for the positive responses did not help.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": "The accuracy for English translation was saturated with the three features (K), (I), and (L). The speaker's identification proved to be effective, unlike for Japanese translation. This is due to the necessity of controlling politeness in Japanese translations according to the speaker. The SA tag for the positive responses did not help here either. These results suggest that the SA tag of the previous SA unit and the speaker's identification should be kept, in addition to the representative forms, when we implement the positive response translation system together with the SA tagging system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": "Related Works and Discussions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "7",
"sec_num": null
},
{
"text": "We discuss the tagging work in this section. In subsection 5.3, we showed that Japanese segmentation into SA units was quite successful only with lexical information, but English segmentation was not that successful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "7",
"sec_num": null
},
{
"text": "Although we do not know of any experiments directly comparable to ours, a recent work reported by Cettolo and Falavigna (1998) seems to be similar. In that paper, they worked on finding semantic boundaries in Italian dialogues with the \"appointment scheduling task.\"",
"cite_spans": [
{
"start": 98,
"end": 126,
"text": "Cettolo and Falavigna (1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "7",
"sec_num": null
},
{
"text": "Their semantic boundary nearly corresponds to our SA unit boundary. Cettolo and Falavigna (1998) reported recall and precision rates of 62.8% and 71.8%, respectively, which were obtained with insertion and deletion of boundary markers. These scores are clearly lower than our results for the Japanese segmentation match.",
"cite_spans": [
{
"start": 68,
"end": 96,
"text": "Cettolo and Falavigna (1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "7",
"sec_num": null
},
{
"text": "Although we should not jump to a generalization, we are tempted to say that Japanese dialogues are easier to segment than those in Western languages. With this in mind, we would like to discuss our study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "7",
"sec_num": null
},
{
"text": "First of all, was the manual segmentation quality the same for both corpora? As we explained in subsection 2.1, both corpora were tagged by experts, and the entire result was checked by one of them for each language. Therefore, we believe there was no quality gap significant enough to explain the difference in segmentation performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "7",
"sec_num": null
},
{
"text": "Secondly, which lexical information yielded such a performance gap? We investigated the effects of parts-of-speech and morphemes in the segmentation of both languages. We conducted the same 10-fold cross-validation tests as in subsection 5.3 and obtained 82.29% (recall) and 86.16% (precision) for Japanese under condition (B'), which used only pos's in w_{x-1}^{x+2} for the PE calculation. English, in contrast, marked rates of 65.63% (recall) and 73.35% (precision) under the same condition. These results indicated the outstanding effectiveness of Japanese pos's in segmentation. Actually, we could see some pos's, such as the \"ending particle (shu-jyoshi),\" that clearly indicate sentence endings, and we considered that they played important roles in the segmentation. English, on the other hand, did not seem to have such strong segment-indicating pos's. Although lexical information is important in English segmentation (Stolcke and Shriberg, 1996), what other information can help improve such segmentation? Hirschberg and Nakatani (1996) showed that prosodic information helps human discourse segmentation. Litman and Passonneau (1995) addressed the usefulness of a \"multiple knowledge source\" in human and automatic discourse segmentation. Venditti and Swerts (1996) stated that the intonational features of many Indo-European languages help cue the structure of spoken discourse. Cettolo and Falavigna (1998) reported improvements in Italian semantic boundary detection with acoustic information. All of these works indicate that the use of acoustic or prosodic information is useful, so this is surely one of our future directions.",
"cite_spans": [
{
"start": 920,
"end": 948,
"text": "(Stolcke and Shriberg, 1996)",
"ref_id": "BIBREF13"
},
{
"start": 1010,
"end": 1040,
"text": "Hirschberg and Nakatani (1996)",
"ref_id": "BIBREF2"
},
{
"start": 1110,
"end": 1138,
"text": "Litman and Passonneau (1995)",
"ref_id": "BIBREF4"
},
{
"start": 1256,
"end": 1269,
"text": "Swerts (1996)",
"ref_id": "BIBREF14"
},
{
"start": 1385,
"end": 1413,
"text": "Cettolo and Falavigna (1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "7",
"sec_num": null
},
{
"text": "The use of higher syntactic information is also one of our directions. The SA unit should be a meaningful syntactic unit, although its degree of meaningfulness may be less than that in written texts. This aspect can easily be incorporated into our probability term PE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "7",
"sec_num": null
},
{
"text": "We have described a new efficient statistical speech act type tagging system based on a statistical model used in Japanese morphological analyzers. This system integrates linguistic, acoustic, and situational features and efficiently performs optimal segmentation of a turn and tagging. Several tagging experiments showed that the system segmented turns and assigned speech act type tags at high accuracy rates on Japanese data. Comparatively lower performance was obtained on English data, and we discussed the performance difference. We also examined the effect of the parameters in the statistical models on tagging performance. We finally showed that the SA tags in this paper are useful in translating positive responses that often appear in task-oriented dialogues such as ours.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
}
],
"back_matter": [
{
"text": "The authors would like to thank Mr. Yasuo Tanida for his excellent programming work and Dr. Seiichi Yamamoto for stimulating discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic detection of semantic boundaries based on acoustic and lexical knowledge",
"authors": [
{
"first": "M",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Falavigna",
"suffix": ""
}
],
"year": 1998,
"venue": "ICSLP '98",
"volume": "4",
"issue": "",
"pages": "1551--1554",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Cettolo and D. Falavigna. 1998. Automatic de- tection of semantic boundaries based on acoustic and lexical knowledge. In ICSLP '98, volume 4, pages 1551-1554.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Attention, intentions and the structure of discourse",
"authors": [
{
"first": "B",
"middle": [
"J"
],
"last": "Grosz",
"suffix": ""
},
{
"first": "C",
"middle": [
"L"
],
"last": "Sidner",
"suffix": ""
}
],
"year": 1986,
"venue": "Computational Linguistics",
"volume": "12",
"issue": "3",
"pages": "175--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. J. Grosz and C. L. Sidner. 1986. Atten- tion, intentions and the structure of discourse. Computational Linguistics, 12(3):175-204, July- September.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A prosodic analysis of discourse segments in direction-giving monologues",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hirschberg",
"suffix": ""
},
{
"first": "C",
"middle": [
"H"
],
"last": "Nakatani",
"suffix": ""
}
],
"year": 1996,
"venue": "34th Annual Meeting of the Association for the Computational Linguistics",
"volume": "",
"issue": "",
"pages": "286--293",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Hirschberg and C. H. Nakatani. 1996. A prosodic analysis of discourse segments in direction-giving monologues. In 34th Annual Meeting of the Asso- ciation for the Computational Linguistics, pages 286-293.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Statistical Methods for Speech Recognition, chapter 10",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Jelinek, 1997. Statistical Methods for Speech Recognition, chapter 10. The MIT Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Combining multiple knowledge sources for discourse segmentation",
"authors": [
{
"first": "D",
"middle": [
"J"
],
"last": "Litman",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Passonneau",
"suffix": ""
}
],
"year": 1995,
"venue": "33rd Annual Meeting of the Association for the Computational Linguistics",
"volume": "",
"issue": "",
"pages": "108--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. J. Litman and R. J. Passonneau. 1995. Combining multiple knowledge sources for discourse segmentation. In 33rd Annual Meeting of the Association for Computational Linguistics, pages 108-115.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A speech and language database for speech translation research",
"authors": [
{
"first": "T",
"middle": [],
"last": "Morimoto",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Uratani",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Furuse",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sobashima",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Iida",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sagisaka",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Higuchi",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yamazaki",
"suffix": ""
}
],
"year": 1994,
"venue": "ICSLP '94",
"volume": "",
"issue": "",
"pages": "1791--1794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Morimoto, N. Uratani, T. Takezawa, O. Furuse, Y. Sobashima, H. Iida, A. Nakamura, Y. Sagisaka, N. Higuchi, and Y. Yamazaki. 1994. A speech and language database for speech translation research. In ICSLP '94, pages 1791-1794.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An informationtheoretic model of discourse for next utterance type prediction",
"authors": [
{
"first": "M",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Morimoto",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "35",
"issue": "",
"pages": "1050--1061",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Nagata and T. Morimoto. 1994. An information- theoretic model of discourse for next utterance type prediction. Transactions of Information Processing Society of Japan, 35(6):1050-1061.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A stochastic Japanese morphological analyzer using a forward-DP backward-A* N-best search algorithm",
"authors": [
{
"first": "M",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Nagata. 1994. A stochastic Japanese morpholog- ical analyzer using a forward-DP and backward-",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A* N-best search algorithm",
"authors": [],
"year": null,
"venue": "Proceedings of Coling94",
"volume": "",
"issue": "",
"pages": "201--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A* N-best search algorithm. In Proceedings of Coling94, pages 201-207.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "C4.5: Programs for Machine Learning",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Utilizing statistical dialogue act processing in verbmobil",
"authors": [
{
"first": "N",
"middle": [],
"last": "Reithinger",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Maier",
"suffix": ""
}
],
"year": 1995,
"venue": "33rd Annual Meeting of the Associations for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "116--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Reithinger and E. Maier. 1995. Utilizing statisti- cal dialogue act processing in verbmobil. In 33rd Annual Meeting of the Associations for Computa- tional Linguistics, pages 116-121.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Speech Acts",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Searle",
"suffix": ""
}
],
"year": 1969,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Searle. 1969. Speech Acts. Cambridge Univer- sity Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A bilingual set of communicative act labels for spontaneous dialogues",
"authors": [
{
"first": "M",
"middle": [],
"last": "Seligman",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Fais",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Tomokiyo",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Seligman, L. Fais, and M. Tomokiyo. 1994. A bilingual set of communicative act labels for spontaneous dialogues. Technical Report TR-IT- 0081, ATR-ITL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic linguistic segmentation of conversational speech",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 1996,
"venue": "ICSLP '96",
"volume": "2",
"issue": "",
"pages": "1005--1008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke and E. Shriberg. 1996. Automatic lin- guistic segmentation of conversational speech. In ICSLP '96, volume 2, pages 1005-1008.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Intonational cues to discourse structure in Japanese",
"authors": [
{
"first": "J",
"middle": [],
"last": "Venditti",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Swerts",
"suffix": ""
}
],
"year": 1996,
"venue": "ICSLP '96",
"volume": "2",
"issue": "",
"pages": "725--728",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Venditti and M. Swerts. 1996. Intonational cues to discourse structure in Japanese. In ICSLP '96, volume 2, pages 725-728.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Probability calculation."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "(A): Surface wordforms and pos's of w_x^{x+1}, i.e., word range r = 1; (B): Surface wordforms and pos's of w_{x-1}^{x+2}, i.e., word range r = 2; (C): (A) with a pause duration between w_x and w_{x+1}; (D): (B) with a pause duration between w_x and w_{x+1}"
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "(I) Representative form of the positive response; (J) SA tag for the positive response; (K) SA tag for the SA unit previous to the positive response; (L) Speaker (Hotel/Clerk)"
},
"TABREF0": {
"num": null,
"text": "...",
"html": null,
"type_str": "table",
"content": "<table><tr><td>where wi represents a surface wordform, and each vector represents the following additional information for wi.</td></tr><tr><td>li: canonical form and part of speech of wi (linguistic feature)</td></tr><tr><td>ai: pause duration measured in milliseconds after wi (acoustic feature)</td></tr><tr><td>si: speaker's identification for wi, such as clerk or customer (situational feature)</td></tr><tr><td>Therefore, an utterance like Hello I am John Phillips and ... uttered by a customer is viewed as a sequence like</td></tr><tr><td>(Hello, (hello, INTER), 100, customer), (I, (i, PRON), 0, customer), (am, (be, BE), 0, customer), ...</td></tr><tr><td>From here, we will denote a word sequence as W = w1, w2, ..., wi, ..., wn for simplicity. However, note that W is a sequence of quadruples as described above.</td></tr></table>"
},
"TABREF1": {
"num": null,
"text": "Counts in both corpora.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Counts</td><td>Japanese</td><td>English</td></tr><tr><td>Turn</td><td>2,020</td><td>2,020</td></tr><tr><td>SA unit</td><td>5,416</td><td>4,675</td></tr><tr><td>Morpheme</td><td>38,418</td><td>27,639</td></tr><tr><td>POS types</td><td>30</td><td>33</td></tr><tr><td>SA tag type</td><td>29</td><td>17</td></tr></table>"
},
"TABREF2": {
"num": null,
"text": "Parameters in probability terms.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>PE</td><td>PT</td></tr><tr><td>w_{x-r+1}^{x+r} (r: word range)</td><td>t_{j-1} ... t_{j-n}: previous SA tags; f(uj): speaker of uj; g(uj): cue words in uj</td></tr></table>"
},
"TABREF3": {
"num": null,
"text": "T-scores for segmentation accuracies.",
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Recall</td><td/><td colspan=\"2\">Precision</td><td/></tr><tr><td>A</td><td>B</td><td>C</td><td>A</td><td>B</td><td>C</td></tr><tr><td>B 2.84</td><td>-</td><td>-</td><td>B 1.25</td><td>-</td><td>-</td></tr><tr><td colspan=\"2\">C 2.71 0.12</td><td>-</td><td colspan=\"2\">C 0.83 0.44</td><td>-</td></tr><tr><td colspan=\"6\">D 2.57 0.28 0.17 D 0.74 0.39 0.01</td></tr></table>"
},
"TABREF4": {
"num": null,
"text": "Average accuracy for segmentation match.",
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">Parameter Recall rate % Precision rate %</td></tr><tr><td>A</td><td>89.50</td><td>91.99</td></tr><tr><td>B</td><td>91.89</td><td>92.92</td></tr><tr><td>C</td><td>92.00</td><td>92.57</td></tr><tr><td>D</td><td>92.20</td><td>92.58</td></tr></table>"
},
"TABREF5": {
"num": null,
"text": "Average accuracy for seg.+tag. match.",
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">Parameter Recall rate % Precision rate %</td></tr><tr><td>E</td><td>72.25</td><td>72.70</td></tr><tr><td>F</td><td>74.91</td><td>75.35</td></tr><tr><td>G</td><td>74.83</td><td>75.29</td></tr><tr><td>H</td><td>74.50</td><td>74.96</td></tr></table>"
},
"TABREF6": {
"num": null,
"text": "T-scores for seg.+tag. accuracies.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Recall</td><td/><td/><td colspan=\"2\">Precision</td><td/></tr><tr><td>E</td><td>F</td><td>G</td><td>E</td><td>F</td><td>G</td></tr><tr><td>F 1.87</td><td>-</td><td>-</td><td>F 1.97</td><td>-</td><td>-</td></tr><tr><td colspan=\"2\">G 1.78 0.05</td><td colspan=\"3\">-G 1.90 0.04</td><td>-</td></tr><tr><td colspan=\"6\">H 1.50 0.26 0.21 H 1.60 0.28 0.24</td></tr></table>"
},
"TABREF7": {
"num": null,
"text": "Representative forms and their counts.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Japanese</td><td>freq.</td><td>English</td><td>freq.</td></tr><tr><td>Kekkou</td><td>69</td><td>I understand</td><td>6</td></tr><tr><td>Soudesu ka</td><td>192</td><td>Great</td><td>5</td></tr><tr><td>Hai</td><td>930</td><td>Okay</td><td>240</td></tr><tr><td>Soudesu</td><td>120</td><td>I see</td><td>136</td></tr><tr><td>Mochiron</td><td>7</td><td>All right</td><td>136</td></tr><tr><td>Soudesu ne</td><td>16</td><td>Very well</td><td>13</td></tr><tr><td>Shouchi</td><td>30</td><td>Certainly</td><td>27</td></tr><tr><td>Wakari-</td><td/><td>Yes</td><td>359</td></tr><tr><td>mashita</td><td>304</td><td>Fine</td><td>52</td></tr><tr><td>Kashikomari-</td><td/><td>Right</td><td>10</td></tr><tr><td>mashita</td><td>300</td><td>Sure</td><td>44</td></tr><tr><td/><td/><td>Very good</td><td>9</td></tr><tr><td>Total</td><td colspan=\"2\">1,968 Total</td><td>1,037</td></tr></table>"
},
"TABREF8": {
"num": null,
"text": "Accuracies with one feature.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Feature</td><td>J to E (%)</td><td>E to J (%)</td></tr><tr><td>I</td><td>54.83</td><td>46.96</td></tr><tr><td>J</td><td>51.73</td><td>34.33</td></tr><tr><td>K</td><td>73.02</td><td>55.35</td></tr><tr><td>L</td><td>40.09</td><td>37.80</td></tr></table>"
},
"TABREF9": {
"num": null,
"text": "Best performance for each number of features.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Features</td><td>J to E (%)</td><td>t</td><td>E to J (%)</td><td>t</td></tr><tr><td>K</td><td>73.02</td><td>-</td><td>55.35</td><td>-</td></tr><tr><td>K,I</td><td>88.51</td><td>15.42</td><td>60.66</td><td>3.10</td></tr><tr><td>K,I,L</td><td>88.92</td><td>0.51</td><td>65.58</td><td>2.49</td></tr><tr><td>K,I,L,J</td><td>88.21</td><td>0.75</td><td>66.74</td><td>0.55</td></tr></table>"
}
}
}
}