{
"paper_id": "Y95-1025",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:38:45.781422Z"
},
"title": "Using Brackets to Improve Search for Statistical Machine Translation",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hong Kong University of Science & Technology Clear Water Bay",
"location": {
"settlement": "Hong Kong"
}
},
"email": ""
},
{
"first": "Cindy",
"middle": [],
"last": "Ng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hong Kong University of Science & Technology Clear Water Bay",
"location": {
"settlement": "Hong Kong"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a method to improve search time and space complexity in statistical machine translation architectures, by employing linguistic bracketing information on the source language sentence. It is one of the advantages of the probabilistic formulation that competing translations may be compared and ranked by a principled measure, but at the same time, optimizing likelihoods over the translation space dictates heavy search costs. To make statistical architectures practical, heuristics to reduce search computation must be incorporated. An experiment applying our method to a prototype Chinese-English translation system demonstrates substantial improvement.",
"pdf_parse": {
"paper_id": "Y95-1025",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a method to improve search time and space complexity in statistical machine translation architectures, by employing linguistic bracketing information on the source language sentence. It is one of the advantages of the probabilistic formulation that competing translations may be compared and ranked by a principled measure, but at the same time, optimizing likelihoods over the translation space dictates heavy search costs. To make statistical architectures practical, heuristics to reduce search computation must be incorporated. An experiment applying our method to a prototype Chinese-English translation system demonstrates substantial improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The work we discuss here is embedded within the SILC project at HKUST (Wu 1994; Fung Wu 1994; Wu & Fung 1994; Wu & Xia 1995; Wu 1995a; Wu 1995b; Wu 1995c) which focuses on problems of machine translation learning. We are developing machine learning techniques to bear upon the shortage of adequate knowledge resources for natural language analysis, particularly for Chinese where there is relatively little previous computational linguistics research from which to draw. It is one of our objectives to investigate the suitability for Chinese of the statistical translation model originally proposed by IBM (Brown et al. 1990; Brown et al. 1993) for Indo-European languages. Henceforth we will therefore use \"Chinese\" to refer to the source language and \"English\" to refer to the target language, reflecting the . prototype SILC system.",
"cite_spans": [
{
"start": 70,
"end": 79,
"text": "(Wu 1994;",
"ref_id": "BIBREF8"
},
{
"start": 80,
"end": 93,
"text": "Fung Wu 1994;",
"ref_id": "BIBREF12"
},
{
"start": 94,
"end": 109,
"text": "Wu & Fung 1994;",
"ref_id": "BIBREF12"
},
{
"start": 110,
"end": 124,
"text": "Wu & Xia 1995;",
"ref_id": "BIBREF13"
},
{
"start": 125,
"end": 134,
"text": "Wu 1995a;",
"ref_id": "BIBREF9"
},
{
"start": 135,
"end": 144,
"text": "Wu 1995b;",
"ref_id": "BIBREF10"
},
{
"start": 145,
"end": 154,
"text": "Wu 1995c)",
"ref_id": "BIBREF11"
},
{
"start": 606,
"end": 625,
"text": "(Brown et al. 1990;",
"ref_id": null
},
{
"start": 626,
"end": 644,
"text": "Brown et al. 1993)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An inherent characteristic of the basic IBM stochastic channel model is the large search space, due to the wide range of distortions that must be allowed in order to successfully transfer sentences of one language to the other. The underlying generative model maps target\u2022language strings into source-language strings (i.e., in the reverse direction from translation). During translation, a maximum likelihood target\u2022language string is sought for the input source-language string, according to Bayes' formula: (1) argmax Pr(elc) = argmax Pr(cle) Pr(e) e e",
"cite_spans": [
{
"start": 510,
"end": 513,
"text": "(1)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
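The decision rule in Equation 1 can be sketched in log space as follows; the lexicons, the romanized Chinese tokens, and the candidate list are fabricated stand-ins for illustration, not the system's actual parameter tables.

```python
def lm_logprob(e_words):
    # Hypothetical bigram language model standing in for log Pr(e).
    lm = {("<s>", "i"): -0.5, ("i", "have"): -0.5,
          ("have", "these"): -1.0, ("these", "figures"): -0.5,
          ("<s>", "these"): -2.0, ("figures", "i"): -3.0}
    toks = ["<s>"] + e_words
    return sum(lm.get(bg, -5.0) for bg in zip(toks[:-1], toks[1:]))

def tm_logprob(c_words, e_words):
    # Hypothetical lexical model standing in for log Pr(c|e): score each
    # Chinese word by its best-matching English word in the hypothesis.
    t = {("wo", "i"): -0.1, ("you3", "have"): -0.2,
         ("zhexie", "these"): -0.3, ("shuzi", "figures"): -0.2}
    return sum(max(t.get((c, e), -8.0) for e in e_words) for c in c_words)

def decode(c_words, candidates):
    # Equation 1: rank candidates by log Pr(c|e) + log Pr(e).
    return max(candidates,
               key=lambda e: tm_logprob(c_words, e) + lm_logprob(e))

best = decode(["wo", "you3", "zhexie", "shuzi"],
              [["i", "have", "these", "figures"],
               ["these", "figures", "i", "have"]])
```

Both candidates contain the same words, so the translation model scores them equally; the language model factor Pr(e) is what prefers the fluent word order.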
{
"text": "The distortion operations in the channel model are chosen to permit sufficient flexibility to map English strings into Chinese translations that have greatly different word order. (It is a simplifying assumption of the model that the only sentence translations considered are those where the majority of words can be translated by lexical substitution.) The scheme admits many implausible mappings along with the legitimate translations, but thereby gains robustness. During the recognition process, legitimate translations will be selected so long as the implausible mappings have lower likelihoods. The IBM model employs an A* search strategy on the space of translation hypotheses using incremental hypothesis expansion. The distance-to-goal heuristic is not admissible but reasonable estimates can be made yielding good performance. This approach arguably provides the highest possible accuracy assuming that no additional information is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In reality, however, additional information can usually be made available. The method we propose here exploits one such type of information, namely, that a preprocessing stage can be used to annotate the input source-language sentence with a syntactic bracketing. We will not dwell on the bracketing method here; numerous approaches for automatic bracketing have been developed, including strategies employing full grammars, local patterns, and information-theoretic metrics. Work on Chinese parsing (Jiang 1985; Zhou & Chang 1986; Lum & Pun 1988; Lee & Hsu 1991; Lee et al. 1992) would be particularly applicable here.",
"cite_spans": [
{
"start": 500,
"end": 512,
"text": "(Jiang 1985;",
"ref_id": "BIBREF4"
},
{
"start": 513,
"end": 531,
"text": "Zhou & Chang 1986;",
"ref_id": "BIBREF14"
},
{
"start": 532,
"end": 547,
"text": "Lum & Pun 1988;",
"ref_id": "BIBREF7"
},
{
"start": 548,
"end": 563,
"text": "Lee & Hsu 1991;",
"ref_id": "BIBREF6"
},
{
"start": 564,
"end": 580,
"text": "Lee et al. 1992)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The translation system employs two main sets of learned parameters corresponding to the two factors on the right side of Equation 1: the language model and the translation model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Translation Model",
"sec_num": "2"
},
{
"text": "Parameters for the translation model consist of (1) translation probabilities Pr(cle) which describe bilingual lexical correspondences in terms of the probability that a given English word e translates into a Chinese word c, and (2) alignment probabilities Pr(ai lj,l, m) which crudely describe word order variation in terms of the probability that a word in position j of a length-m Chinese sentence corresponds to a word in position a, of a corresponding length-/ English translation. The translation and alignment probabilities are automatically estimated by an iterative expectation-maximization algorithm (Wu & Xia 1995) , using as training data a parallel bilingual corpus containing parliamentary transcripts from the Hong Kong Legislative Council which are available in both English and Chinese versions. The size of the training corpus was approximately 17.9Mb of raw English text and 9.6Mb of corresponding raw Chinese translation, or about 3 million English words, and approximately 3.2 million Chinese words (under certain Chinese segmentation assumptions). Since these proceedings were not originally available in machine-analyzable form, it was necessary to carry out data conversion and reformatting using manual and automatic processing, and then to perform automatic sentence alignment (Wu 1994) .",
"cite_spans": [
{
"start": 610,
"end": 625,
"text": "(Wu & Xia 1995)",
"ref_id": "BIBREF13"
},
{
"start": 1303,
"end": 1312,
"text": "(Wu 1994)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Translation Model",
"sec_num": "2"
},
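The iterative expectation-maximization estimation of the lexical translation probabilities Pr(c|e) can be sketched in IBM Model 1 style; the two sentence pairs and romanized tokens below are fabricated toy data, and this is only an illustration of the EM idea, not the actual Wu & Xia estimator.

```python
from collections import defaultdict

# Toy parallel corpus: (English sentence, Chinese sentence) pairs.
pairs = [(["i", "have"], ["wo", "you3"]),
         (["i", "go"], ["wo", "qu"])]

e_vocab = {e for es, _ in pairs for e in es}
t = defaultdict(lambda: 1.0 / len(e_vocab))  # uniform init of t[(c, e)]

for _ in range(10):                          # EM iterations
    count = defaultdict(float)               # expected co-occurrence counts
    total = defaultdict(float)
    for es, cs in pairs:
        for c in cs:
            z = sum(t[(c, e)] for e in es)   # E-step normalizer
            for e in es:
                p = t[(c, e)] / z
                count[(c, e)] += p
                total[e] += p
    for (c, e), n in count.items():          # M-step re-estimation
        t[(c, e)] = n / total[e]
```

Because "wo" co-occurs with "i" in both pairs while the other tokens occur once each, the iterations concentrate probability mass on t[("wo", "i")].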
{
"text": "Parameters for the English language model, on the other hand, were estimated from a much larger monolingual corpus to reduce sparse data problems. About 280Mb of text from the Wall Street Journal were used to to obtain a bigram model with the parameters are Pr(ei (ei _ i ), under a vocabulary restriction to match the translation lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Translation Model",
"sec_num": "2"
},
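Bigram parameter estimation with a vocabulary restriction can be sketched as follows; the toy corpus and vocabulary are placeholders, and a real system would also smooth the maximum-likelihood estimates.

```python
from collections import Counter

# Toy monolingual corpus and a vocabulary restricted to the
# translation lexicon (both hypothetical).
corpus = [["i", "have", "these", "figures"],
          ["i", "have", "no", "figures"]]
vocab = {"i", "have", "these", "figures"}

unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    # Map out-of-vocabulary words to a single unknown token.
    toks = ["<s>"] + [w if w in vocab else "<unk>" for w in sent]
    unigrams.update(toks[:-1])
    bigrams.update(zip(toks[:-1], toks[1:]))

def bigram_prob(prev, word):
    # Unsmoothed MLE estimate of Pr(word | prev).
    return bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0
```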
{
"text": "Given the parameters, translation of a test sentence in Chinese is performed by a search to solve Equation 1. In our baseline system, we employ a beam search algorithm, a variation of A* with a thresholded agenda width.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Translation Model",
"sec_num": "2"
},
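The thresholded-agenda variation of A* described above can be sketched generically; the hypothesis encoding (tuples of words) and the scorer are hypothetical stand-ins for the real translation hypotheses.

```python
import heapq

def beam_search(initial, expand, score, is_goal, beam_width=3):
    # Expand every hypothesis on the agenda; return the best goal
    # found, otherwise keep only the beam_width highest-scoring
    # successors (the agenda-width threshold).
    agenda = [initial]
    while agenda:
        successors = [h2 for h in agenda for h2 in expand(h)]
        goals = [h for h in successors if is_goal(h)]
        if goals:
            return max(goals, key=score)
        agenda = heapq.nlargest(beam_width, successors, key=score)
    return None

# Toy usage: assemble the best two-word output from scored words.
word_scores = {"good": 2, "ok": 1, "bad": 0}
best = beam_search(
    (),                                          # empty hypothesis
    lambda h: [h + (w,) for w in word_scores],   # append one word
    lambda h: sum(word_scores[w] for w in h),    # hypothesis score
    lambda h: len(h) == 2)                       # goal: length 2
```

Narrowing `beam_width` trades completeness for speed, which is the point of thresholding the agenda.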
{
"text": "In the baseline model, the coupling between words of the test sentence is ignored. The search process considers each of the input tokens as an individual word. In reality, however, often there exist known relations between individual words, as for example in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Bracketing Constraints",
"sec_num": "3"
},
{
"text": "JA p IN* o ), where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Bracketing Constraints",
"sec_num": "3"
},
{
"text": "nx is a noun phrase in which is a measuring element to describe A. Thus we would not expect the translations of these two tokens to be separated far apart in the target output. Again, in (ftill Z tr, IJ o ), we consider (13E tJ f1.14) a phrase to be translated as a unit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Bracketing Constraints",
"sec_num": "3"
},
{
"text": "The search strategy we propose accepts any available bracketing information, full or partial. The bracketing information is used to partition the search in divide-and-conquer fashion. Innermost constituents are translated first, then assembled compositionally into larger constituents. Within any level of bracketing, an A* search is performed. The merits of the bracket-guided search strategy can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Bracketing Constraints",
"sec_num": "3"
},
{
"text": "1. Use of divide-and-conquer. The problem of finding a complete English translation is recursively decomposed into sub-problems of finding translations of substrings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Bracketing Constraints",
"sec_num": "3"
},
{
"text": "2. Independence of syntactic knowledge. While it is true that the bracketing preprocessor may utilize syntactic knowledge, such knowledge is not used by the search algorithm itself. Moreover, the brackets do not carry syntactic category labels. Thus if alternative non-syntactic (e.g., statistical) bracketing strategies are available, the proposed algorithm can be deployed without any grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Bracketing Constraints",
"sec_num": "3"
},
{
"text": "The spirit of the statistical approach with respect to robustness is preserved. At one extreme, given a complete bracketing of the input sentence, the solution of the sub-problems immediately yields the solution to the original problem. At the other extreme, if no brackets are given (or equivalently, each individual input token is bracketed by itself), the algorithm simply degenerates into the baseline model. In between the extremes, the search is guided heuristically as in the baseline model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preservation of robustness.",
"sec_num": "3."
},
{
"text": "Our search algorithm dictates that nodes in the lower levels (those with higher level numbers) of the tree of c must be processed before nodes in the higher levels. In Figure 1 , we have five subtrees labeled S1, S2 S3 S4, and S (which is the whole sentence). subtree 54 is processed first, followed arbitrarily by Si , S2 or S3. If we assume the subtrees Si and then S3 are processed next, the intermediate result will be as shown in Figure 2 , where Pi, P2 and P3 hold English substrings. Thus at any point during the search, a subtree may consist of:",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 176,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 435,
"end": 443,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Preservation of robustness.",
"sec_num": "3."
},
{
"text": "1. Chinese tokens only. In this case, the sub-search is identical to that in the baseline system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preservation of robustness.",
"sec_num": "3."
},
{
"text": "All lexical translations have been made; it may still remain to align the English substrings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English substrings only.",
"sec_num": "2."
},
{
"text": "A mixture of Chinese tokens and English substrings. This is analogous to a partial hypothesis in the baseline model where some of the English words have been translated. As above, the English substrings may still need to be aligned. In addition the Chinese tokens must still be translated and aligned. We impose an additional assumption: the available English substrings are aligned prior to continuing the search on Chinese token translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "The search algorithm follows the general schema below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "\u2022 While unprocessed nodes in the Chinese tree remain, choose an unprocessed subtree at the deepest remaining level, and replace Si with its translation computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "1. Create hypothesis nodes in the search tree representing alternative target lengths 1 for the output English phrases P that might be translations of Si. 2. Arrange the search order of any previously computed English substrings under Si according to their length-normalized joint probability g = Pr(e) Pr(c, ale).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "3. While any previously computed English substrings of the subtree remain to be processed:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "(a) Let p* be the remaining English substring with largest value of g. Expand the hypothesis space to include the set of hypotheses that include p* (each hypothesis corresponds to mapping p* to a different location in P). Calculate ft) for each hypothesis. 4. (At this point the subtree consists of Chinese tokens only.) Initialize a set of hypotheses using the translation probabilities: for each Chinese word c3 in Si, find all English words e such that Pr(c; le) is non-zero. Arrange their search order according to their Pr(ci le) value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "5. While any Chinese tokens remain to be processed:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "(a) Expand the hypothesis with the maximum remaining Pr(c; le) value. Generate subhypotheses that associate alternative positions aj for the English word e. Calculate id, for each hypothesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "6. (At this point all Chinese in the subtree has been eliminated.) For each hypothesis:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "(a) While empty positions in the output string remain: i. Fill in the empty positions using the bigram probabilities Pr(e i lei_ i ) from the language model, and calculate iv.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
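The schema above can be sketched as a recursion over the bracket tree: the deepest subtrees are translated first, and each level then runs a sub-search over a mixture of remaining Chinese tokens and already-translated English substrings. Here `translate_flat` is a hypothetical placeholder for the baseline A*/beam sub-search, with a toy lexicon, not the actual decoder.

```python
def translate_flat(items):
    # Placeholder sub-search: translate leftover Chinese tokens with a
    # toy lexicon; already-translated English substrings pass through.
    lexicon = {"wo": "i", "you3": "have",
               "zhexie": "these", "shuzi": "figures"}
    return " ".join(lexicon.get(x, x) for x in items)

def translate_bracketed(node):
    """node is either a Chinese token (str) or a list of child nodes."""
    if isinstance(node, str):
        return node  # leaf token: left for the enclosing sub-search
    # Recurse first so innermost constituents are translated first,
    # then assemble them compositionally at this level.
    return translate_flat([translate_bracketed(child) for child in node])

out = translate_bracketed(["wo", ["you3", ["zhexie", "shuzi"]]])
```

With no brackets the whole input reaches `translate_flat` as one flat list, recovering the baseline behavior; a full bracketing drives the decomposition at every level.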
{
"text": "We have tested our model with both natural test cases (from the Hong Kong Hansard) as well as synthetic ones. The synthetic cases are artifically constructed using the natural corpus vocabulary. Only noun phrases and verb phrases were bracketed, using the following simple pattern templates:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 NP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "1. two consecutive nouns, e.g. I1 A \u20ac; or 2. an adjective + a noun, e.g. WM SU; or 3. two nouns with the word n in between, e.g. 41 1:11 J 114; or 4. an adjective + a NP, e.g. Ma su BM; or 5. two NPs with the word riti in between, e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In addition, each of the above NP forms allows insertion of a measuring phrase of the form \"(specifier) + (number) + (unit)\" where the parentheses denote optionality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "rftj rog",
"sec_num": null
},
{
"text": "\u2022 VP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "rftj rog",
"sec_num": null
},
{
"text": "1. a verb a noun, e.g. VIIP *V; or 2. a verb + a NP, e.g. mg! **. Erti",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "rftj rog",
"sec_num": null
},
{
"text": "As a measure of efficiency, the average number of nodes in the search tree for each strategy was recorded. Table 1 shows the average number of nodes in the search tree expanded per test case for both the baseline and bracketing strategies, with a significant reduction in the search cost. Two example test sentences are shown in the Appendix. For both the cases with and without bracketing on each test sentence input, the top five output candidate translations are shown, along with their log probabilities. In addition to improving efficiency, the bracketing strategy simultaneously achieves higher accuracy as summarized in the tables below. The correctness criteria for the two sets of test cases are a bit different, as the outputs from the synthetic set do not have any reference translations to serve as an evaluation standard.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 114,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "rftj rog",
"sec_num": null
},
{
"text": "For the natural test cases from the corpus, a translation is considered:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "rftj rog",
"sec_num": null
},
{
"text": "1. Correct if it is exactly the same as the translation made in the bilingual corpus, or conveys the same meaning as that in the bilingual corpus;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "rftj rog",
"sec_num": null
},
{
"text": "2. Partially Correct if it conveys more or less the same meaning as that in the bilingual corpus and is grammatically incorrect; ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "rftj rog",
"sec_num": null
},
{
"text": "In most systems only partial bracketing information will be available since full-coverage grammars are not robust. The degree of bracketing affects performance as follows. A minimallybracketed sentence, where there is only one pair of brackets enclosing the entire sentence, reduces to the original A* search. On the other hand, a fully-bracketed sentence offers the least room for variation in the translation hypotheses, and dictates clausal translation at every level of the phrase structure. Thus speed will be maximally enhanced, but robustness will be minimized. Because of these properties, it is best to bias the bracketer conservatively, i.e., to commit to a pair of brackets only when certain. This study underlines the effectiveness of combining linguistic analysis with statistical corpus-based techniques for practical applications such as machine translation. A conservative use of linguistic analysis improves both speed and accuracy, while maintaining the robustness and broad coverage of statistical methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "Sentence 1, unbracketed:, ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "A statistical approach to machine translation",
"authors": [
{
"first": "",
"middle": [],
"last": "Roossin",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "2",
"pages": "29--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ROOSSIN. 1990. A statistical approach to machine translation. Computational Linguis- tics, 16(2):29-85.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "DellaPietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "DellaPietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "BROWN, PETER F., STEPHEN A. DELLAPIETRA, VINCENT J. DELLAPIETRA, ROBERT L. MERCER. 1993. The mathematics of statistical machine translation: Parameter estima- tion. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Statistical augmentation of a Chinese machine-readable dictionary",
"authors": [
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the Second Annual Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "69--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "FLI NG, PASCALE & DEKAI WU. 1994. Statistical augmentation of a Chinese machine-readable dictionary. In Proceedings of the Second Annual Workshop on Very Large Corpora, 69-85, Kyoto.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Chinese Parsing: An Initial. Exploration at LRC. Computer Processing of Chinese and Oriental Languages",
"authors": [
{
"first": "Y",
"middle": [
"P"
],
"last": "Jiang",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "2",
"issue": "",
"pages": "127--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "JIANG, Y.P. 1985. Chinese Parsing: An Initial. Exploration at LRC. Computer Processing of Chinese and Oriental Languages, 2(2):127-138.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Parsing Chinese Nominalizations based on HPSG",
"authors": [
{
"first": "H",
"middle": [
"J"
],
"last": "Lee",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Dai",
"suffix": ""
},
{
"first": "Y",
"middle": [
"S"
],
"last": "Chang",
"suffix": ""
}
],
"year": 1992,
"venue": "Computer Processing of Chinese and Oriental Languages",
"volume": "6",
"issue": "2",
"pages": "143--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LEE, H.J., J.C. DAI, Y.S. CHANG. 1992. Parsing Chinese Nominalizations based on HPSG. Computer Processing of Chinese and Oriental Languages, 6(2):143-158.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Parsing Chinese Sentences in a Unification-based Grammar",
"authors": [
{
"first": "H",
"middle": [
"J"
],
"last": "Lee",
"suffix": ""
},
{
"first": "P",
"middle": [
"R"
],
"last": "Hsu",
"suffix": ""
}
],
"year": 1991,
"venue": "Computer Processing of Chinese and Oriental Languages",
"volume": "5",
"issue": "3-4",
"pages": "271--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LEE, H.J. & P.R. Hsu. 1991. Parsing Chinese Sentences in a Unification-based Grammar. Computer Processing of Chinese and Oriental Languages, 5(3-4):271-284.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On Parsing Complex Noun Phrases in a Chinese Sentence",
"authors": [
{
"first": "B",
"middle": [],
"last": "Lum",
"suffix": ""
},
{
"first": "K",
"middle": [
"H"
],
"last": "Pun",
"suffix": ""
}
],
"year": 1988,
"venue": "1988 International Conference on Computer Processing of Chinese and Oriental Languages. Proceedings",
"volume": "",
"issue": "",
"pages": "470--474",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lum, B. & K.H. PUN. 1988. On Parsing Complex Noun Phrases in a Chinese Sentence. In 1988 International Conference on Computer Processing of Chinese and Oriental Lan- guages. Proceedings, 470-474.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Aligning a parallel English-Chinese corpus statistically with lexical criteria",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd Annual Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "80--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, DEKAI. 1994. Aligning a parallel English-Chinese corpus statistically with lexical crite- ria. In Proceedings of the 32nd Annual Conference of the Association for Computational Linguistics, 80-87, Las Cruces, New Mexico.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An algorithm for simultaneously bracketing parallel texts by aligning words",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd Annual Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "244--251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, DEKAI. 1995a. An algorithm for simultaneously bracketing parallel texts by aligning words. In Proceedings of the 33rd Annual Conference of the Association for Computa- tional Linguistics, 244-251, Cambridge, Massachusetts.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Stochastic inversion transduction grammars, with application to segmentation, bracketing, and alignment of parallel corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of IJCA 1-95, Fourteenth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, DEKAI. 1995b. Stochastic inversion transduction grammars, with application to seg- mentation, bracketing, and alignment of parallel corpora. In Proceedings of IJCA 1-95, Fourteenth International Joint Conference on Artificial Intelligence, Montreal. To ap- pear.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Trainable coarse bilingual grammars for parallel text bracketing",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Third Annual Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "69--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "WU, DEKAI. 1995c. Trainable coarse bilingual grammars for parallel text bracketing. In Proceedings of the Third Annual Workshop on Very Large Corpora, 69-81, Cambridge, Massachusetts.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving Chinese tokenization with linguistic filters on statistical lexical acquisition",
"authors": [
{
"first": "Dekai & Pascale",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the Fourth Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "180--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "WU, DEKAI & PASCALE FUNG. 1994. Improving Chinese tokenization with linguistic filters on statistical lexical acquisition. In Proceedings of the Fourth Conference on Applied Natural Language Processing, 180-181, Stuttgart.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Large-scale automatic extraction of an English-Chinese lexicon. Machine Translation",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xuanyin",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "WU, DEKAI XUANYIN XIA. 1995. Large-scale automatic extraction of an English-Chinese lexicon. Machine Translation. To appear.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Methodology for Deterministic Chinese Parsing",
"authors": [
{
"first": "J",
"middle": [
"Y"
],
"last": "Zhou",
"suffix": ""
},
{
"first": "S",
"middle": [
"K"
],
"last": "Chang",
"suffix": ""
}
],
"year": 1986,
"venue": "Computer Processing of Chinese and Oriental Languages",
"volume": "2",
"issue": "3",
"pages": "139--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ZHOU, J.Y. S.K. CHANG. 1986. A Methodology for Deterministic Chinese Parsing. Computer Processing of Chinese and Oriental Languages, 2(3):139-161.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Example bracket structure of a test sentence c.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Bracket structure of an intermediate sentence translation hypothesis, where subtrees Sl, S3, and S4 of Figure 1 have been processed.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"num": null,
"content": "<table><tr><td colspan=\"3\">Corpus Test Cases</td><td colspan=\"3\">Synthetic Test Cases</td></tr><tr><td>Baseline</td><td>Bracketing</td><td>% reduction</td><td>Baseline</td><td>Bracketing</td><td>% reduction</td></tr><tr><td>443819</td><td>309860</td><td>30.2</td><td>434351</td><td>346702</td><td>20.2</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Average number of nodes in the search tree per bracketed test case"
},
"TABREF2": {
"num": null,
"content": "<table><tr><td>Category</td><td>Baseline</td><td>Percent</td><td>Bracketing</td><td>Percent</td></tr><tr><td>Correct</td><td>10</td><td>25.7</td><td>13</td><td>33.3</td></tr><tr><td>Partially Correct</td><td>21</td><td>53.8</td><td>20</td><td>51.3</td></tr><tr><td>Not Correct</td><td>8</td><td>20.5</td><td>6</td><td>15.4</td></tr><tr><td>Total</td><td>39</td><td>100.0</td><td>39</td><td>100.0</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Results with test cases from corpus. For the synthetic test cases, a translation is considered: 1. Correct if it is an acceptable translation as judged by a human evaluator; 2. Partially Correct if it conveys part of the meaning of the original sentence; 3. Not Correct otherwise."
},
"TABREF3": {
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Results with synthetic test cases"
}
}
}
}