{
"paper_id": "W03-0311",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:12:33.231550Z"
},
"title": "Retrieving Meaning-equivalent Sentences for Example-based Rough Translation",
"authors": [
{
"first": "Mitsuo",
"middle": [],
"last": "Shimohata",
"suffix": "",
"affiliation": {
"laboratory": "ATR Spoken Language Translation Research Laboratories",
"institution": "",
"location": {}
},
"email": "mitsuo.shimohata@atr.co.jp"
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": "",
"affiliation": {
"laboratory": "ATR Spoken Language Translation Research Laboratories",
"institution": "",
"location": {}
},
"email": "eiichiro.sumita@atr.co.jp"
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Example-based machine translation (EBMT) is a promising translation method for speechto-speech translation because of its robustness. It retrieves example sentences similar to the input and adjusts their translations to obtain the output. However, it has problems in that the performance degrades when input sentences are long and when the style of inputs and that of the example corpus are different. This paper proposes a method for retrieving \"meaning-equivalent sentences\" to overcome these two problems. A meaning-equivalent sentence shares the main meaning with an input despite lacking some unimportant information. The translations of meaning-equivalent sentences correspond to \"rough translations.\" The retrieval is based on content words, modality, and tense.",
"pdf_parse": {
"paper_id": "W03-0311",
"_pdf_hash": "",
"abstract": [
{
"text": "Example-based machine translation (EBMT) is a promising translation method for speechto-speech translation because of its robustness. It retrieves example sentences similar to the input and adjusts their translations to obtain the output. However, it has problems in that the performance degrades when input sentences are long and when the style of inputs and that of the example corpus are different. This paper proposes a method for retrieving \"meaning-equivalent sentences\" to overcome these two problems. A meaning-equivalent sentence shares the main meaning with an input despite lacking some unimportant information. The translations of meaning-equivalent sentences correspond to \"rough translations.\" The retrieval is based on content words, modality, and tense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Speech-to-speech translation (S2ST) technologies consist of speech recognition, machine translation (MT), and speech synthesis (Waibel, 1996; Wahlster, 2000; Yamamoto, 2000) . The MT part receives speech texts recognized by a speech recognizer. The nature of speech causes difficulty in translation since the styles of speech are different from those of written text and are sometimes ungrammatical (Lazzari, 2002) . Therefore, rule-based MT cannot translate speech accurately compared with its performance for written-style text .",
"cite_spans": [
{
"start": 127,
"end": 141,
"text": "(Waibel, 1996;",
"ref_id": "BIBREF17"
},
{
"start": 142,
"end": 157,
"text": "Wahlster, 2000;",
"ref_id": "BIBREF16"
},
{
"start": 158,
"end": 173,
"text": "Yamamoto, 2000)",
"ref_id": "BIBREF19"
},
{
"start": 399,
"end": 414,
"text": "(Lazzari, 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Example-based MT (EBMT) is one of the corpusbased machine translation methods. It retrieves examples similar to inputs and adjusts their translations to obtain the output (Nagao, 1981) . EBMT is a promising method for S2ST in that it performs robust translation of ungram-matical sentences and requires far less manual work than rule-based MT.",
"cite_spans": [
{
"start": 171,
"end": 184,
"text": "(Nagao, 1981)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, there are two problems in applying EBMT to S2ST. One is that the translation accuracy drastically drops as input sentences become long. As the length of a sentence becomes long, the number of retrieved similar sentences greatly decreases. This often results in no output when translating long sentences. The other problem arises due to the differences in style between input sentences and the example corpus. It is difficult to acquire a large volume of natural speech data since it requires much time and cost. Therefore, we cannot avoid using a corpus with written-style text, which is different from that of natural speech. This style difference makes retrieval of similar sentences difficult and degrades the performance of EBMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper proposes a method of retrieving sentences whose meaning is equivalent to input sentences to overcome the two problems. A meaning-equivalent sentence means a sentence having the main meaning of an input sentence despite lacking some unimportant information. Such a sentence can be more easily retrieved than a similar sentence, and its translation is useful enough in S2ST. We call this translation strategy example-based \"rough translation.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Retrieval of meaning-equivalent sentences is based on content words, modality, and tense. This provides robustness against long inputs and in the differences in style between the input and the example corpus. This advantage distinguishes our method from other translation methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We describe the difficulties in S2ST in Section 2. Then, we describe our purpose, features for retrieval, and retrieval method for meaning-equivalent sentences in Section 3. We report an experiment comparing our method with two other methods in Section 4. The experiment demonstrates the robustness of our method to length of input and the style differences between inputs and the example corpus. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A major problem with machine translation, regardless of the translation method, is that performance drops rapidly as input sentences become longer. For EBMT, the longer input sentences become, the fewer similar example sentences exist in the example corpus. Figure 1 shows translation difficulty in long sentences in EBMT (Sumita, 2001 ). The EBMT system is given 591 test sentences and returns translation result as translated/untranslated. Untranslated means that there exists no similar example sentences for the input. Although the EBMT is equipped with a large example corpus (about 170K sentences), it often failed to translate long inputs.",
"cite_spans": [
{
"start": 322,
"end": 335,
"text": "(Sumita, 2001",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 258,
"end": 266,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Translation Degradation by Input Length",
"sec_num": "2.1"
},
{
"text": "The performance of example-based S2ST greatly depends on the example corpus. It is advantageous for an example corpus to have a large volume and the same style as the input sentences. A corpus of texts dictated from conversational speech is favorable for S2ST. Unfortunately, it is very difficult to prepare such an example corpus since this task requires laborious work such as speech recording and speech transcription. Therefore, we cannot avoid using a written-style corpus, such as phrasebooks, to prepare a sufficiently large volume of examples. Contained texts are almost grammatical and rarely contain unnecessary words. We call the style used in such a corpus \"concise\" and the style seen in conversational speech \"conversational.\" Table 1 shows the average numbers of words in concise (Takezawa et al., 2002) and conversational corpora (Takezawa, 1999) . Sentences in conversational style are about 2.5 words longer than those in concise style in both Language English Japanese Concise 5.4 6.2 Conversational 7.9 8.9 Table 2 shows cross perplexity between concise and conversational corpora (Takezawa et al., 2002) . Perplexity is used as a metric for how well a language model derived from a training set matches a test set (Jurafsky and Martin, 2000) . Cross perplexities between concise and conversational corpora are much higher than the selfperplexity of either of the two styles. This result also illustrates the great difference between the two styles.",
"cite_spans": [
{
"start": 795,
"end": 818,
"text": "(Takezawa et al., 2002)",
"ref_id": "BIBREF12"
},
{
"start": 846,
"end": 862,
"text": "(Takezawa, 1999)",
"ref_id": "BIBREF13"
},
{
"start": 1101,
"end": 1124,
"text": "(Takezawa et al., 2002)",
"ref_id": "BIBREF12"
},
{
"start": 1235,
"end": 1262,
"text": "(Jurafsky and Martin, 2000)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 741,
"end": 748,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1027,
"end": 1034,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Style Differences between Concise and Conversational",
"sec_num": "2.2"
},
{
"text": "Example-based S2ST has the difficulties described in Section 2 when it attempts to translate inputs exactly. Here, we set our translation goal to translating input sentences not exactly but roughly. We assume that a rough translation is useful enough for S2ST, since unimportant information rarely disturbs the progress of dialogs and can be recovered in the following dialog if needed. We call this translation strategy \"rough translation.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning-equivalent Sentence",
"sec_num": "3"
},
{
"text": "We propose \"meaning-equivalent sentence\" to carry out rough translation. Meaning-equivalent sentences are defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning-equivalent Sentence",
"sec_num": "3"
},
{
"text": "A sentence that shares the main meaning with the input sentence despite lacking some unimportant information. It does not contain information additional to that in the input sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "meaning-equivalent sentence (to an input sentence)",
"sec_num": null
},
{
"text": "Important information is subjectively recognized mainly due to one of two reasons: (1) It can be surmised from the general situation, or (2) It does not place a strong restriction on the main information. Figure 2 shows examples of unimportant/important information. Information to be examined is written in bold. The information \"of me\" in (1) and \"around here\" in (3) can be surmised from the general situation, while the information \"of this painting\" in (2) and \"Chinese\" would not be surmised since it denotes a special object. The subordinate sentences in (4) and (5) are regarded as unimportant since they have small significance and are omittable.",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 213,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "meaning-equivalent sentence (to an input sentence)",
"sec_num": null
},
{
"text": "The retrieval of meaning-equivalent sentence depends on content words and basically does not depend on functional words. Independence from functional words brings robustness to the difference in styles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Idea of Retrieval",
"sec_num": "3.1"
},
{
"text": "However, functional words include important information for sentence meaning: the case relation of content words, modality, and tense. Lack of case relation information is compensated by the nature of the restricted domain. A restricted domain, as a domain of S2ST, has a relatively small lexicon and meaning variety. Therefore, if content words included in an input are given, their relation is almost determined in the domain. Information of modality and tense is extracted from functional words and utilized in classifying the meaning of a sentence (described in Section 3.2.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Idea of Retrieval",
"sec_num": "3.1"
},
{
"text": "This retrieval method is similar to information retrieval in that content words are used as clues for retrieval (Frakes and Baeza-Yates, 1992 ). However, our task has two difficulties: (1) Retrieval is carried out not by documents but by single sentences. This reduces the effectiveness of word frequencies. (2) The differences in modality and tense in sentences have to be considered since they play an important role in determining a sentence's communicative meaning.",
"cite_spans": [
{
"start": 112,
"end": 141,
"text": "(Frakes and Baeza-Yates, 1992",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Idea of Retrieval",
"sec_num": "3.1"
},
{
"text": "Words categorized as either noun 1 , adjective, adverb, or verb are recognized as content words. Interrogatives We utilize a thesaurus to expand the coverage of the example corpus. We call the relation of two words that are the same \"identical\" and words that are synonymous in the given thesaurus \"synonymous.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Words",
"sec_num": "3.2.1"
},
{
"text": "The meaning of a sentence is discriminated by its modality and tense, since these factors obviously determine meaning. We defined two modality groups and one tense group by examining our corpus. The modality groups are (\"request\", \"desire\", \"question\", \"confirmation\", \"others\",) and (\"negation\", \"others\".) The tense group is (\"past\", \"others\".) These modalities and tense are distinguished by surface clues, mainly by particles and auxiliary verbs. A speech act is a concept similar to modality in which speakers' intentions are represented. The two studies introduced information of the speech act in their S2ST systems (Wahlster, 2000; Tanaka and Yokoo, 1999) . The two studies and our method differ in the effect of speech act information. Their effect of speech act information is so small that it is limited to generating the translation text. Translation texts are refined by selecting proper expressions according to the detected speakers' intention.",
"cite_spans": [
{
"start": 623,
"end": 639,
"text": "(Wahlster, 2000;",
"ref_id": "BIBREF16"
},
{
"start": 640,
"end": 663,
"text": "Tanaka and Yokoo, 1999)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modality and Tense",
"sec_num": "3.2.2"
},
{
"text": "Sentences that satisfy the conditions below are recognized as meaning-equivalent sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval and Ranking",
"sec_num": "3.3"
},
{
"text": "1. It is required to have the same modality and tense as the input sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval and Ranking",
"sec_num": "3.3"
},
{
"text": "2. All content words are included (identical or synonymous) in the input sentence. This means that the set of content words of a meaning-equivalent sentence is a subset of the input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval and Ranking",
"sec_num": "3.3"
},
{
"text": "3. At least one content word is included (identical) in the input sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval and Ranking",
"sec_num": "3.3"
},
{
"text": "If more than one sentence is retrieved, we must rank them to select the most similar one. We introduce \"focus area\" in the ranking process to select sentences that are meaning-equivalent to the main sentence in complex sentences. We set the focus area as the last N words from the word list of an input sentence. N denotes the number of content words in meaning-equivalent sentences. This is because main sentences in complex sentences tend to be placed at the end in Japanese. Retrieved sentences are ranked by the conditions described below. Conditions are described in order of priority. If there is more than one sentence having the highest score under these conditions, the most similar sentence is selected randomly. C1: # of identical words in focus area. C2: # of synonymous words in focus area. C3: # of identical words in non-focus area. C4: # of synonymous words in non-focus area. C5: # of common functional words. C6: # of different functional words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval and Ranking",
"sec_num": "3.3"
},
{
"text": "(the fewer, the higher priority) Figure 4 shows an example of conditions for ranking. Content word in a focus area of input are underlined and functional words are written in italic.",
"cite_spans": [],
"ref_spans": [
{
"start": 33,
"end": 41,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Retrieval and Ranking",
"sec_num": "3.3"
},
{
"text": "We used a bilingual corpus of travel conversation, which has Japanese sentences and their English translations (Takezawa et al., 2002) . This corpus was sentencealigned, and a morphological analysis was done on both languages by our morphological analysis tools. The bilingual corpus was divided into example data (Example) and test data (Concise) by extracting test data randomly from the whole set of data. In addition to this, we used a conversational speech corpus for another set of test data (Takezawa, 1999) . This corpus contains dialogs between a traveler and a hotel We use sentences including more than one content word among the three corpora. The statistics of the three corpora are shown in Table 4 .",
"cite_spans": [
{
"start": 111,
"end": 134,
"text": "(Takezawa et al., 2002)",
"ref_id": "BIBREF12"
},
{
"start": 498,
"end": 514,
"text": "(Takezawa, 1999)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 705,
"end": 712,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Test Data",
"sec_num": "4.1"
},
{
"text": "The thesaurus used in the experiment was \"Kadokawa-Ruigo-Jisho\" (Ohno and Hamanishi, 1984) . Each word has semantic code consisting of three digits, that is, this thesaurus has three hierarchies. We defined \"synonymous\" words as sharing exact semantic codes.",
"cite_spans": [
{
"start": 64,
"end": 90,
"text": "(Ohno and Hamanishi, 1984)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test Data",
"sec_num": "4.1"
},
{
"text": "We use two example-based retrieval methods to show the characteristic of the proposed method. The first method (Method-1) uses \"strict\" retrieval, which does not allow missing words in input. The method takes functional words into account on retrieval. This method corresponds to the conventional EBMT method. The second method (Method-2) uses \"rough\" retrieval, which does allow missing words in input, but still takes functional words into account.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compared Retrieval Methods",
"sec_num": "4.2"
},
{
"text": "Evaluation was carried out by judging whether retrieved sentences are meaning-equivalent to inputs. It must be noted that inputs and retrieved sentences are both in Japanese. We did not compare inputs and translations of retrieved sentences, since translation accuracy is a matter of the example corpus and does not concern our method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "4.3"
},
{
"text": "The sentence with the highest score among retrieved sentences was taken and evaluated. The sentences are marked manually as meaning-equivalent or not by a Japanese native. A meaning-equivalent sentence includes all important information in the input but may lack some unimportant information. Figure 5 shows the accuracy of the three methods with the concise and conversational style data. Accuracy is defined as the ratio of the number of correctly equivalent sentences to that of total inputs. Inputs are classified into four types by their word length.",
"cite_spans": [],
"ref_spans": [
{
"start": 293,
"end": 301,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "4.3"
},
{
"text": "The performance of Method-1 reflects the narrow coverage and style-dependency of conventional EBMT. The longer input sentences become, the more steeply its performance degrades in both styles. The method can retrieve no similar sentence for inputs longer than eleven words in conversational style.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Method-2 adopts a \"rough\" strategy in retrieval. It attains higher accuracy than Method-1, especially with longer inputs. This indicates the robustness of the rough retrieval strategy to longer inputs. However, the method still has an accuracy difference of about 15% between the two styles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "The accuracy of the proposed method is better than that of Method-2, especially in conversational style. The accuracy difference in longer inputs becomes smaller (about 4%) than that of Method-2. This indicates the robustness of the proposed method to the differences between the two styles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "The rough translation proposed in this paper is a type of EBMT (Sumita, 2001; Veale and Way, 1997; Carl, 1999; Brown, 2000) . The basic idea of EBMT is that sentences similar to the inputs are retrieved from an example corpus and their translations become the basis of outputs.",
"cite_spans": [
{
"start": 63,
"end": 77,
"text": "(Sumita, 2001;",
"ref_id": "BIBREF11"
},
{
"start": 78,
"end": 98,
"text": "Veale and Way, 1997;",
"ref_id": "BIBREF15"
},
{
"start": 99,
"end": 110,
"text": "Carl, 1999;",
"ref_id": "BIBREF2"
},
{
"start": 111,
"end": 123,
"text": "Brown, 2000)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EBMT",
"sec_num": "5.1"
},
{
"text": "Here, let us consider the difference between our method and other EBMT methods by dividing similarity into a content-word part and a functional-word part. In the content-word part, our method and other EBMT methods are almost the same. Content words are important information in a similarity measure process, and thesauri are utilized to extend lexical coverage. In the functional-word part, our method is characterized by disregarding functional words, while other EBMT methods still rely on them for the similarity measure. In our method, the lack of functional word information is compensated by the semantically narrow variety in S2ST domains and the use of information on modality and tense. Consequently, our method gains robustness to length and the style differences between inputs and the example corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EBMT",
"sec_num": "5.1"
},
{
"text": "Translation memory (TM) is aimed at retrieving informative translation example from example corpus. TM and our method share the retrieval strategy of rough and wide coverage. However, recall is more highly weighted than precision in TM, while recall and precision should be equally considered in our method. To carry out wide coverage retrieval, TM relaxed various conditions on inputs: Preserving only mono-gram and bi-gram on words/characters (Baldwin, 2001; Sato, 1992) , removing functional words (Kumano et al., 2002; Wakita et al., 2000) , and removing content words (Sumita and Tsutsumi, 1988) . In our method, information on functional words is removed and that on modality and tense is introduced instead. Information on word order is also removed while instead we preserve information on whether each word is located in the focus area.",
"cite_spans": [
{
"start": 445,
"end": 460,
"text": "(Baldwin, 2001;",
"ref_id": "BIBREF0"
},
{
"start": 461,
"end": 472,
"text": "Sato, 1992)",
"ref_id": "BIBREF9"
},
{
"start": 501,
"end": 522,
"text": "(Kumano et al., 2002;",
"ref_id": "BIBREF5"
},
{
"start": 523,
"end": 543,
"text": "Wakita et al., 2000)",
"ref_id": "BIBREF18"
},
{
"start": 573,
"end": 600,
"text": "(Sumita and Tsutsumi, 1988)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Memory",
"sec_num": "5.2"
},
{
"text": "In this paper, we introduced the idea of meaningequivalent sentences for robust example-based S2ST. Meaning-equivalent sentences have the same main meaning as the input despite lacking some unimportant information. Translation of meaning-equivalent sentences corresponds to rough translations, which aim not at exact translation with narrow coverage but at rough translation with wide coverage. For S2ST, we assume that this translation strategy is sufficiently useful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Then, we described a method for retrieving meaningequivalent sentences from an example corpus. Retrieval is based on content words, modality, and tense. This strategy is feasible owing to the restricted domains, often adopted in S2ST, which have relatively small variety in lexicon and meaning. An experiment demonstrated the robustness of our method to input length and the style differences between inputs and the example corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Most MT systems aim to achieve exact translation, but unfortunately they often output bad or no translation for long conversational speeches. The rough translation proposed in this paper achieves robustness in translation for such inputs. This method compensates for the shortcomings of conventional MT and makes S2ST technology more practical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Japanese content words are written in sans serif style and Japanese functional words in italic style.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Space characters are inserted into word boundaries in Japanese texts.4 The value \"others\" in all modality/tense groups is omitted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Words are converted to base form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research reported here was supported in part by a contract with the Telecommunications Advancement Organization of Japan entitled, \"A study of speech dialogue translation technology based on a large corpus\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Low-cost, high-performance translation retrieval: Dumber is better",
"authors": [
{
"first": "T",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of the 39th ACL",
"volume": "",
"issue": "",
"pages": "18--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Baldwin. 2001. Low-cost, high-performance transla- tion retrieval: Dumber is better. In Proc. of the 39th ACL, pages 18-25.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Automated generalization of translation examples",
"authors": [
{
"first": "R",
"middle": [
"D"
],
"last": "Brown",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of the 18th COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. D. Brown. 2000. Automated generalization of trans- lation examples. In Proc. of the 18th COLING.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Inducing translation templates for example-based machine translation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Carl",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of the MT Summit VII",
"volume": "",
"issue": "",
"pages": "250--258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Carl. 1999. Inducing translation templates for example-based machine translation. In Proc. of the MT Summit VII, pages 250-258.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Information Retrieval Data Structures & Algorithms",
"authors": [
{
"first": "W",
"middle": [
"B"
],
"last": "Frakes",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Baeza-Yates",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. B. Frakes and R. Baeza-Yates, editors. 1992. Infor- mation Retrieval Data Structures & Algorithms. Pren- tice Hall.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Speech and Language Processing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "J",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Jurafsky and J. H. Martin, editors. 2000. Speech and Language Processing. Prentice Hall.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A translation aid system by retrieving bilingual news database",
"authors": [
{
"first": "T",
"middle": [],
"last": "Kumano",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Tanaka",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Uratani",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ehara",
"suffix": ""
}
],
"year": 2002,
"venue": "System and Computers in Japan",
"volume": "",
"issue": "",
"pages": "19--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Kumano, I. Goto, H. Tanaka, N. Uratani, and T. Ehara. 2002. A translation aid system by retrieving bilingual news database. In System and Computers in Japan, pages 19-29.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The V1 framework program in Europe: Some thoughts about speech to speech translation research",
"authors": [
{
"first": "G",
"middle": [],
"last": "Lazzari",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of 40th ACL Workshop on Speech-to-Speech Translation",
"volume": "",
"issue": "",
"pages": "129--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Lazzari. 2002. The V1 framework program in Eu- rope: Some thoughts about speech to speech trans- lation research. In Proc. of 40th ACL Workshop on Speech-to-Speech Translation, pages 129-135.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A framework of a mechanical translation between Japanese and English by analogy principle",
"authors": [
{
"first": "M",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1981,
"venue": "Artificial and Human Intelligence",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Nagao. 1981. A framework of a mechanical transla- tion between Japanese and English by analogy princi- ple. In Artificial and Human Intelligence, pages 173- 180.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "CTM: An example-based translation aid system",
"authors": [
{
"first": "S",
"middle": [],
"last": "Sato",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. of the 14th COLING",
"volume": "",
"issue": "",
"pages": "1259--1263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Sato. 1992. CTM: An example-based translation aid system. In Proc. of the 14th COLING, pages 1259- 1263.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A translation aid system using flexible text retrieval based on syntaxmatching",
"authors": [
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Tsutsumi",
"suffix": ""
}
],
"year": 1988,
"venue": "TRL Research Report TR87-1019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Sumita and Y. Tsutsumi. 1988. A translation aid system using flexible text retrieval based on syntax- matching. In TRL Research Report TR87-1019. IBM Tokyo Research Laboratory.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Example-based machine translation using DP-matching between work sequences",
"authors": [
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of the ACL 2001 Workshop on Data-Driven Methods in Machine Translation",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Sumita. 2001. Example-based machine translation using DP-matching between work sequences. In Proc. of the ACL 2001 Workshop on Data-Driven Methods in Machine Translation, pages 1-8.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sugaya",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the 3rd LREC",
"volume": "",
"issue": "",
"pages": "147--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Takezawa, E. Sumita, F. Sugaya, H. Yamamoto, and S. Yamamoto. 2002. Toward a broad-coverage bilin- gual corpus for speech translation of travel conversa- tions in the real world. In Proc. of the 3rd LREC, pages 147-152.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Building a bilingual travel conversation database for speech translation research",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of the 2nd international workshop on East-Asian resources and evaluation conference on language resources and evaluation",
"volume": "",
"issue": "",
"pages": "17--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Takezawa. 1999. Building a bilingual travel con- versation database for speech translation research. In Proc. of the 2nd international workshop on East-Asian resources and evaluation conference on language re- sources and evaluation, pages 17-20.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An efficient statistical speech act type tagging system for a speech translation systems",
"authors": [
{
"first": "H",
"middle": [],
"last": "Tanaka",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Yokoo",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "381--388",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Tanaka and A. Yokoo. 1999. An efficient statistical speech act type tagging system for a speech translation systems. In Proc. of the Association for Computational Linguistics, pages 381-388.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Gaijin: A bootstrapping, template-driven approach to example-based MT",
"authors": [
{
"first": "T",
"middle": [],
"last": "Veale",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of the NeMNLP97",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Veale and A. Way. 1997. Gaijin: A bootstrapping, template-driven approach to example-based MT. In Proc. of the NeMNLP97.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Verbmobil: Foundations of Speech-to-Speech Translation",
"authors": [
{
"first": "W",
"middle": [],
"last": "Wahlster",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Wahlster, editor. 2000. Verbmobil: Foundations of Speech-to-Speech Translation. Springer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Interactive translation of conversational speech",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 1996,
"venue": "IEEE Computer",
"volume": "29",
"issue": "7",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Waibel. 1996. Interactive translation of conversa- tional speech. IEEE Computer, 29(7):41-48.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Fine keyword clustering using a thesaurus and example setences for speech translation",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Wakita",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Matsui",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sagisaka",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of International Conference of Speech Language Processing",
"volume": "",
"issue": "",
"pages": "390--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Wakita, K. Matsui, and Y. Sagisaka. 2000. Fine keyword clustering using a thesaurus and example se- tences for speech translation. In Proc. of International Conference of Speech Language Processing, pages 390-393.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Toward speech communications beyond language barrier -research of spoken language translation technologies at ATR",
"authors": [
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of ICSLP",
"volume": "4",
"issue": "",
"pages": "406--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Yamamoto. 2000. Toward speech communications beyond language barrier -research of spoken language translation technologies at ATR -. In Proc. of ICSLP, volume 4, pages 406-411.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Distribution of Untranslated Inputs by Length 2 Difficulty in Example-based S2ST"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Sentences and their Modality and Tense sample sentences and their modality and tense. Clues are underlined."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Example of Conditions for Ranking"
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Figure 5: Results"
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">: Number of Words by Sentences</td></tr><tr><td/><td colspan=\"2\">Language Model</td></tr><tr><td/><td colspan=\"2\">Concise Conversational</td></tr><tr><td>Concise Test Conversational</td><td>16.4 72.3</td><td>58.3 16.3</td></tr></table>",
"html": null,
"num": null,
"text": ""
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>: Cross Perplexity</td></tr><tr><td>English and Japanese. This is because conversational</td></tr><tr><td>style sentences contain unnecessary words or subordinate</td></tr><tr><td>clauses, which have the effects of assisting the listener's</td></tr><tr><td>comprehension and avoiding the possibility of giving the</td></tr><tr><td>listener a curt impression.</td></tr></table>",
"html": null,
"num": null,
"text": ""
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": ""
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td>Sentence 3</td><td>Modality &amp; Tense 4</td></tr><tr><td>hoteru wo yoyaku shi tekudasai</td><td>request</td></tr><tr><td>(Will you reserve this hotel?)</td><td/></tr><tr><td>hoteru wo yoyaku shi tai</td><td>desire</td></tr><tr><td>(I want to reserve this hotel.)</td><td/></tr><tr><td>hoteru wo yoyaku shi mashi ta ka?</td><td>question</td></tr><tr><td>(Did you reserve this hotel?)</td><td>past</td></tr><tr><td>hoteru wo yoyaku shi tei masen</td><td>negation</td></tr><tr><td>(I do not reserve this hotel.)</td><td/></tr></table>",
"html": null,
"num": null,
"text": "shows a part of the clues used for discriminating modalities in Japanese. Sentences having no clues are classified as others.Figure 3 2 shows"
},
"TABREF6": {
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Meaning-equivalent Sentence</td></tr><tr><td>baggu wo nusuma re ta</td><td/></tr><tr><td>(My bag was stolen).</td><td/></tr><tr><td>C1 nusumu 5</td><td>1</td></tr><tr><td>C2 ( kaban = baggu )</td><td>1</td></tr><tr><td>C3 -</td><td>0</td></tr><tr><td>C4 -</td><td>0</td></tr><tr><td>C5 wo, re, ta</td><td>3</td></tr><tr><td>C6 suru, teiru, ni, masu</td><td>4</td></tr></table>",
"html": null,
"num": null,
"text": "Inputgaishutsu shi teiru aida ni, (While I was out), kaban wo nusuma re mashi ta (my baggage was stolen.)"
},
"TABREF7": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Statistics of the Corpora receptionist. It tests the robustness in styles. We call this test corpus \"Conversational.\""
}
}
}
}