| { |
| "paper_id": "P92-1023", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:11:56.057701Z" |
| }, |
| "title": "GPSM: A GENERALIZED PROBABILISTIC SEMANTIC MODEL FOR AMBIGUITY RESOLUTION", |
| "authors": [ |
| { |
| "first": "Tjing-Shin", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National Tsing Hua University Hsinchu", |
| "location": { |
| "postCode": "30043", |
| "region": "R.O.C", |
| "country": "TAIWAN" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Yih-Fen", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National Tsing Hua University Hsinchu", |
| "location": { |
| "postCode": "30043", |
| "region": "R.O.C", |
| "country": "TAIWAN" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Yih", |
| "middle": [], |
| "last": "Su", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National Tsing Hua University Hsinchu", |
| "location": { |
| "postCode": "30043", |
| "region": "R.O.C", |
| "country": "TAIWAN" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In natural language processing, ambiguity resolution is a central issue, and can be regarded as a preference assignment problem. In this paper, a Generalized Probabilistic Semantic Model (GPSM) is proposed for preference computation. An effective semantic tagging procedure is proposed for tagging semantic features. A semantic score function is derived based on a score function, which integrates lexical, syntactic and semantic preference under a uniform formulation. The semantic score measure shows substantial improvement in structural disambiguation over a syntax-based approach.", |
| "pdf_parse": { |
| "paper_id": "P92-1023", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In natural language processing, ambiguity resolution is a central issue, and can be regarded as a preference assignment problem. In this paper, a Generalized Probabilistic Semantic Model (GPSM) is proposed for preference computation. An effective semantic tagging procedure is proposed for tagging semantic features. A semantic score function is derived based on a score function, which integrates lexical, syntactic and semantic preference under a uniform formulation. The semantic score measure shows substantial improvement in structural disambiguation over a syntax-based approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In a large natural language processing system, such as a machine translation system (MTS), ambiguity resolution is a critical problem. Various rule-based and probabilistic approaches had been proposed to resolve various kinds of ambiguity problems on a case-by-case basis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In rule-based systems, a large number of rules are used to specify linguistic constraints for resolving ambiguity. Any parse that violates the semantic constraints is regarded as ungrammatical and rejected. Unfortunately, because every \"rule\" tends to have exception and uncertainty, and illformedness has significant contribution to the error rate of a large practical system, such \"hard rejection\" approaches fail to deal with these situations. A better way is to find all possible interpretations and place emphases on preference, rather than weU-formedness (e.g., [Wilks 83 ].) However, most of the known approaches for giving preference depend heavily on heuristics such as counting the number of constraint satisfactions. Therefore, most such preference measures can not be objectively justified. Moreover, it is hard and cosily to acquire, verify and maintain the consistency of the large fine-grained rule base by hand.", |
| "cite_spans": [ |
| { |
| "start": 568, |
| "end": 577, |
| "text": "[Wilks 83", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Probabilistic approaches greatly relieve the knowledge acquisition problem because they are usually trainable, consistent and easy to meet certain optimum criteria. They can also provide more objective preference measures for \"soft rejection.\" Hence, they are attractive for a large system. The current probabilistic approaches have a wide coverage including lexical analysis [DeRose 88, Church 88], syntactic analysis [Garside 87, Fujisaki 89, Su 88, 89, 91b], restricted semantic analysis [Church 89, Liu 89, 90] , and experimental translation systems [Brown 90 ]. However, there is still no integrated approach for modeling the joint effects of lexical, syntactic and semantic information on preference evaluation.", |
| "cite_spans": [ |
| { |
| "start": 491, |
| "end": 502, |
| "text": "[Church 89,", |
| "ref_id": null |
| }, |
| { |
| "start": 503, |
| "end": 510, |
| "text": "Liu 89,", |
| "ref_id": null |
| }, |
| { |
| "start": 511, |
| "end": 514, |
| "text": "90]", |
| "ref_id": null |
| }, |
| { |
| "start": 554, |
| "end": 563, |
| "text": "[Brown 90", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "A generalized probabilistic semantic model (GPSM) will be proposed in this paper to overcome the above problems. In particular, an integrated formulation for lexical, syntactic and semantic knowledge will be used to derive the semantic score for semantic preference evaluation. Application of the model to structural disam-biguation is investigated. Preliminary experiments show about 10%-14% improvement of the semantic score measure over a model that uses syntactic information only.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In general, a particular semantic interpretation of a sentence can be characterized by a set of lexical categories (or parts of speech), a syntactic structure, and the semantic annotations associated with it. Among the-various interpretations of a sentence, the best choice should be the most probable semantic interpretation for the given input words. In other words, the interpretation that maximizes the following score function [Su 88, 89, 91b] or analysis score [Chen 91 ] is preferred: where (Lex,, Synj, Semi) refers to the kth set of lexical categories, the jth syntactic structure and the ith set of semantic annotations for the input Words. The three component functions are referred to as semantic score (Ssem), syntactic score (Ssyn) and lexical score (Stex), respectively. The global preference measure will be referred to as compositional score or simply as score. In particular, the semantic score accounts for the semantic preference on a given set of lexical categories and a particular syntactic structure for the sentence. Various formulation for the lexical score and syntactic score had been studied extensively in our previous works [Su 88, 89 ", |
| "cite_spans": [ |
| { |
| "start": 432, |
| "end": 439, |
| "text": "[Su 88,", |
| "ref_id": null |
| }, |
| { |
| "start": 440, |
| "end": 443, |
| "text": "89,", |
| "ref_id": null |
| }, |
| { |
| "start": 444, |
| "end": 448, |
| "text": "91b]", |
| "ref_id": null |
| }, |
| { |
| "start": 467, |
| "end": 475, |
| "text": "[Chen 91", |
| "ref_id": null |
| }, |
| { |
| "start": 1155, |
| "end": 1162, |
| "text": "[Su 88,", |
| "ref_id": null |
| }, |
| { |
| "start": 1163, |
| "end": 1165, |
| "text": "89", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preference Assignment Using Score Function", |
| "sec_num": "2." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "Score (Semi, Sgnj, Lexk, Words) --P (Semi, Synj, LezklWords) = P (SemilSynj, Lexk, Words) \u00d7 P (Syn I ILexk, Words) x P (LexklWords)", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Preference Assignment Using Score Function", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Given the formulation in Eqn. (1), first we will show how to extract the abstract objects (Semi, Synj, LexD from a semantic representation. In general, a particular interpretation of a sentence can be represented by an annotated syntax tree (AST), which is a syntax tree annotated with feature structures in the tree nodes. where fA is the feature structure associated with node A. Because an AST preserves both syntactic and semantic information, it can be converted to other deep structure representations easily. Therefore, without lose of generality, the AST representation will be used as the canonical form of semantic representation for preference evaluation. The techniques used here, of course, can be applied to other deep structure representations as well.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Canonical Form of Semantic Representation", |
| "sec_num": null |
| }, |
| { |
| "text": "//-..< (Semi) for the input words wl\" ({wl ... wn}). A good encoding scheme for the Fi's will allow us to take semantic information into account without using redundant information. Hence, we will show how to annotate a syntax tree so that various interpretations can be characterized differently.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A[~]", |
| "sec_num": null |
| }, |
| { |
| "text": "B[fB] C[fc] D[fD] E[fE] F[fF] G[fc] C I C2 C 3 C4 (wl) (w2) (w3) (w4) Ls={A } L7={B, C } L~={B, F, G } Ls={B, F,c4} L4={B, c3, c4} L3={D,E ,c3,ca} L2={D, c2, c3, c4} L1 ={Cl, C2, C3, C4 }", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A[~]", |
| "sec_num": null |
| }, |
| { |
| "text": "A popular linguistic approach to annotate a tree is to use a unification-based mechanism. However, many information irrelevant to disambiguation might be included. An effective encoding scheme should be simple yet can preserve most discrimination information for disambiguation. Such an encoding scheme can be accomplished by associating each phrase struc- , where fj = HeadFeature X~7~,~) is the (primary) head feature of its j-th head (i.e., Xij) in the head list. Non-head features of a child node Xij will not be percolated up to its mother node. The head feature of ~ itself, in this case, is fx. For a terminal node, the head feature will be the semantic tag of the corresponding lexical item; other features in the N-tuple will be tagged as ~b (NULL). Figure 2 shows two possible annotated syntax trees for the sentence \"... saw the boy in the park.\" For instance, the \"loc(ation)\" feature of \"park\" is percolated to its mother NP node as the head feature; it then serves as the secondary head feature of its grandmother node PP, because the NP node is the secondary head of PP. Similarly, the VP node in the left tree is annotated as VP(sta,anim) according to its primary head saw(sta,q~) and secondary head NP(anim,in). The VP(sta,in) node in the fight tree is tagged differently, which reflects different attachment preference of the prepositional phrase.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 759, |
| "end": 767, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic Tagging", |
| "sec_num": null |
| }, |
| { |
| "text": "ture rule A --+ X1X2... XM with a head list (Xi,,Xi,...XiM). The", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Tagging", |
| "sec_num": null |
| }, |
| { |
| "text": "By this simple mechanism, the major characteristics of the children, namely the head features, can be percolated to higher syntactic levels, and their correlation and dependency can be taken into account in preference evaluation even if they are far apart. In this way, different interpretations will be tagged differently. The preference on a particular interpretation can thus be evaluated from the distribution of the annotated syntax trees. Based on the above semantic tagging scheme, a semantic score will be proposed to evaluate the semantic preference on various interpretations for a sentence. Its performance improvement over syntactic score [Su 88, 89 , 91b] will be investigated.", |
| "cite_spans": [ |
| { |
| "start": 651, |
| "end": 658, |
| "text": "[Su 88,", |
| "ref_id": null |
| }, |
| { |
| "start": 659, |
| "end": 661, |
| "text": "89", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Tagging", |
| "sec_num": null |
| }, |
| { |
| "text": "Consequently, a brief review of the syntactic score evaluation method is given before going into details of the semantic score model. (See the cited references for details.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Tagging", |
| "sec_num": null |
| }, |
| { |
| "text": "According to Eqn. (2), the syntactic score can be formulated as follows [Su 88, 89 , 91b]:", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 79, |
| "text": "[Su 88,", |
| "ref_id": null |
| }, |
| { |
| "start": 80, |
| "end": 82, |
| "text": "89", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic Score", |
| "sec_num": "4." |
| }, |
| { |
| "text": "(3) fti = HP(LtlL~-',c~,w~) To avoid the normalization problem [Su 91b ] arisen from different number of transition probabilities for different syntax trees, an alternative formulation of the syntactic score is to evaluate the transition probabilities between configuration changes of the parser. For instance, the configuration of an LR parser is defined by its stack contents and input buffer. For the AST in Figure 1 , the parser configurations after the read of cl, c2, c3, c4 and $ (end-of-sentence) are equivalent to L1, L2, L4, 1-.5 and Ls, respectively. Therefore, the syntactic score can be approximated as [Su 89 , 91b]:", |
| "cite_spans": [ |
| { |
| "start": 63, |
| "end": 70, |
| "text": "[Su 91b", |
| "ref_id": null |
| }, |
| { |
| "start": 617, |
| "end": 623, |
| "text": "[Su 89", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 411, |
| "end": 420, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "S,y,, =_ P(SynilLeZk,W'~) = P(L'~lc'~,w~)", |
| "sec_num": null |
| }, |
| { |
| "text": "In this way, the number of transition probabilities in the syntactic scores of all AST's will be kept the same as the sentence length.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "S, vn ~ P(Ls, LT'\" L2IL,) (4) P(LslL~) x P(LsIL4) x P(L41L2) x P(L21L1)", |
| "sec_num": null |
| }, |
| { |
| "text": "Semantic score evaluation is similar to syntactic score evaluation. From Eqn. (2), we have the following semantic model for semantic score: Figure 2 . The annotations of the context am ignored in evaluating Eqn. (6) due to the assumption of semantics compositionality. The operation mode will be called LLRR+Alv, where N is the dimension of the N-tuple, and the subscript L (or R) refers to the size of the context window. With an appropriate N, the score will provide sufficient discrimination power for general disambiguation problem without resorting to full-blown semantic analysis.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 140, |
| "end": 148, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic Score", |
| "sec_num": "5." |
| }, |
| { |
| "text": "where At = At (ft,l,fln,...,fuv) is the annotated version of At, whose semantic N-tuple is (fl,1, fl,2,-\", ft,N), and 57, fit are the annotated context symbols. Only Ft.1 is assumed to be significant for the transition to Ft in the last equation, because all required information is assumed to have been percolated to Ft-j through semantics composition.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 14, |
| "end": 32, |
| "text": "(ft,l,fln,...,fuv)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic Score", |
| "sec_num": "5." |
| }, |
| { |
| "text": "Each term in Eqn. (5) can be interpreted as the probability thatAt is annotated with the particular set of head features (fs,1, ft,2,..., fI,N) , given that X1 ... XM are reduced to At in the context of a7 and fit. So it can be interpreted informally as P (At (fl,1, ft,2, . . . , fz ~v) I Ai ~ X1. . . XM , in the context of ~-7, fit ). It corresponds to the semantic preference assigned to the annotated node A t\" Since (11,1, fl,~,\"\" ft,N) are the head features from various heads of the substructures of A, each term reflects the feature co-occurrence preference among these heads. Furthermore, the heads could be very far apart. This is different from most simple Markov models, which can deal with local constraints only. Hence, such a formulation well characterizes long distance dependency among the heads, and provides a simple mechanism to incorporate the feature co-occurrence preference among them. For the semantic N-tuple model, the semantic score can thus be expressed as follows: ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 121, |
| "end": 143, |
| "text": "(fs,1, ft,2,..., fI,N)", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 256, |
| "end": 274, |
| "text": "(At (fl,1, ft,2, .", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic Score", |
| "sec_num": "5." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "S~.~", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Semantic Score", |
| "sec_num": "5." |
| }, |
| { |
| "text": "As mentioned before, not all constituents are equally important for disambiguation. For instance, head words are usually more important than modifiers in determining the compositional semantic features of their mother node. There is also lots of redundancy in a sentence. For instance, \"saw boy in park\" is equally recognizable as \"saw the boy in the park.\" Therefore, only a few categories, including verbs, nouns, adjectives, prepositions and adverbs and their projections (NP, VP, AP, PP, ADVP), are used to carry semantic features for disambiguation. These categories are roughly equivalent to the major categories in linguistic theory [Sells 85 ] with the inclusion of adverbs as the only difference.", |
| "cite_spans": [ |
| { |
| "start": 640, |
| "end": 649, |
| "text": "[Sells 85", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Major Categories and Semantic Features", |
| "sec_num": "6." |
| }, |
| { |
| "text": "The semantic feature of each major category is encoded with a set of semantic tags that well describes each category. A few rules of thumb are used to select the semantic tags. In particular, semantic features that can discriminate different linguistic behavior from different possible semantic N-tuples are preferred as the semantic tags. With these heuristics in mind, the verbs, nouns, adjectives, adverbs and prepositions are divided into 22, 30, 14, 10 and 28 classes, respectively. For example, the nouns are divided into \"human,\" \"plant,\" \"time,\" \"space,\" and so on. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Major Categories and Semantic Features", |
| "sec_num": "6." |
| }, |
| { |
| "text": "The semantic N-tuple model is used to test the improvement of the semantic score over syntactic score in structure disambiguation. Eqn. (3) is adopted to evaluate the syntactic score in L2RI mode of operation. The semantic score is derived from Eqn. (6) in L2R~ +AN mode, for N = 1, 2, 3, 4, where N is the dimension of the semantic", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Test and Analysis", |
| "sec_num": "7." |
| }, |
| { |
| "text": "A total of 1000 sentences (including 3 unambiguous ones) are randomly selected from 14 computer manuals for training or testing. They are divided into 10 parts; each part contains 100 sentences. In close tests, 9 parts are used both as the training set and the testing set. In open tests, the rotation estimation approach [Devijver 82 ] is adopted to estimate the open test performance. This means to iteratively test one part of the sentences while using the remaining parts as the training set. The overall performance is then estimated as the average performance of the 10 iterations.", |
| "cite_spans": [ |
| { |
| "start": 322, |
| "end": 334, |
| "text": "[Devijver 82", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "S-tuple.", |
| "sec_num": null |
| }, |
| { |
| "text": "The performance is evaluated in terms of Top-N recognition rate (TNRR), which is defined as the fraction of the test sentences whose preferred interpretation is successfully ranked in the first N candidates. Table 1 shows the simulation resuits of close tests. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 208, |
| "end": 215, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "S-tuple.", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, a generalized probabilistic semantic model (GPSM) is proposed to assign semantic preference to ambiguous interpretations. The semantic model for measuring preference is based on a score function, which takes lexical, syntactic and semantic information into consideration and optimizes the joint preference. A simple yet effective encoding scheme and semantic tagging procedure is proposed to characterize various interpreta-183 tions in an N dimensional feature space. With this encoding scheme, one can encode the interpretations with discriminative features, and take the feature co-occurrence preference among various constituents into account. Unlike simple Markov models, long distance dependency can be managed easily in the proposed model. Preliminary tests show substantial improvement of the semantic score measure over syntactic score measure. Hence, it shows the possibility to overcome the ambiguity resolution problem without resorting to full-blown semantic analysis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "8." |
| }, |
| { |
| "text": "With such a simple, objective and trainable formulation, it is possible to take high level semantic knowledge into consideration in statistic sense. It also provides a systematic way to construct a disambiguation module for large practical machine translation systems without much human intervention; the heavy burden for the linguists to write fine-grained \"rules\" can thus be relieved.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "8." |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A Statistical Approach to Machine Translation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Brown", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Computational Linguistics", |
| "volume": "16", |
| "issue": "2", |
| "pages": "79--85", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brown, P. et al., \"A Statistical Ap- proach to Machine Translation,\" Computational Linguistics, vol. 16, no. 2, pp. 79-85, June 1990.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "ArchTran: A Corpus-Based Statistics-Oriented English-Chinese Machine Translation System", |
| "authors": [ |
| { |
| "first": "S.-C", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "J.-S", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "J.-N", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "K.-Y.", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of Machine Translation Summit 11I", |
| "volume": "", |
| "issue": "", |
| "pages": "33--40", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, S.-C., J.-S. Chang, J.-N. Wang and K.-Y. Su, \"ArchTran: A Corpus-Based Statistics-Oriented English-Chinese Machine Translation System,\" Proceedings of Machine Translation Summit 11I, pp. 33-40, Washing- ton, D.C., USA, July 1-4, 1991.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Syntactic Ambiguity Resolution Using A Discrimination and Robustness Oriented Adaptive Leaming Algorithm", |
| "authors": [ |
| { |
| "first": "T.-H", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Y.-C", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "K.-Y.", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "14th Int. Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "20--28", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chiang, T.-H., Y.-C. Lin and K.-Y. Su, \"Syntactic Ambiguity Resolution Using A Discrimination and Robustness Oriented Adap- tive Leaming Algorithm\", to appear in Pro- ceedings of COLING-92, 14th Int. Conference on Computational Linguistics, Nantes, France, 20-28 July, 1992.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Church", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "ACL Proc. 2nd Conf. on Applied Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "9--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Church 88] Church, K., \"A Stochastic Parts Pro- gram and Noun Phrase Parser for Unrestricted Text,\" ACL Proc. 2nd Conf. on Applied Natu- ral Language Processing, pp. 136-143, Austin, Texas, USA, 9-12 Feb. 1988.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Word Association Norms, Mutual Information, and Lexicography", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Church", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Hanks", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Proc. 27th Annual Meeting of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "26--29", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Church 89] Church, K. and P. Hanks, \"Word As- sociation Norms, Mutual Information, and Lex- icography,\" Proc. 27th Annual Meeting of the ACL, pp. 76-83, University of British Colum- bia, Vancouver, British Columbia, Canada, 26- 29 June 1989.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Grammatical Category Disambiguation by Statistical Optimization", |
| "authors": [ |
| { |
| "first": "Steverl", |
| "middle": [ |
| "J" |
| ], |
| "last": "Derose", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Computational Linguistics", |
| "volume": "14", |
| "issue": "1", |
| "pages": "31--39", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "DeRose, SteverL J., \"Grammatical Category Disambiguation by Statistical Opti- mization,\" Computational Linguistics, vol. 14, no. 1, pp. 31-39, 1988.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A Probabilistic Parsing Method for Sentence Disambiguation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "A" |
| ], |
| "last": "Devijver", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Kittler ; Fujisaki", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Jelinek", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Cocke", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Black", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Nishino", |
| "suffix": "" |
| } |
| ], |
| "year": 1982, |
| "venue": "Proc. of Int. Workshop on Parsing Technologies (IWPT-89)", |
| "volume": "", |
| "issue": "", |
| "pages": "28--31", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Devijver, P.A., and J. Kittler, Pattern Recognition: A Statistical Approach, Prentice-Hall, London, 1982. [Fujisaki 89] Fujisaki, T., F. Jelinek, J. Cocke, E. Black and T. Nishino, \"A Probabilistic Parsing Method for Sentence Disambiguation,\" Proc. of Int. Workshop on Parsing Technologies (IWPT- 89), pp. 85-94, CMU, Pittsburgh, PA, U.S.A., 28-31 August 1989.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "The Computational Analysis of English: A Corpus-Based Approach", |
| "authors": [], |
| "year": 1987, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Garside, Roger, Geoffrey Leech and Geoffrey Sampson (eds.), The Computational Analysis of English: A Corpus-Based Approach, Longman Inc., New York, 1987.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "On the Resolution of English PP Attachment Problem with a Probabilistic Semantic Model", |
| "authors": [ |
| { |
| "first": "C.-L", |
| "middle": [ |
| ";" |
| ], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "O C" |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liu, C.-L., On the Resolution of English PP Attachment Problem with a Probabilistic Se- mantic Model, Master Thesis, National Tsing Hua University, Hsinchu, TAIWAN, R.O.C., 1989.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The Semantic Score Approach to the Disambiguation of PP Attachment Problem", |
| "authors": [ |
| { |
| "first": "C.-L", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "J.-S", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "K.-Y", |
| "middle": [], |
| "last": "Su ; Taipei", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "O C" |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Proc. of \u2022 ROCLING-III", |
| "volume": "", |
| "issue": "", |
| "pages": "253--270", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liu, C.-L, J.-S. Chang and K.-Y. Su, \"The Semantic Score Approach to the Disam- biguation of PP Attachment Problem,\" Proc. of \u2022 ROCLING-III, pp. 253-270, Taipei, R.O.C., September 1990.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Lectures On Contemporary Syntactic Theories: An Introduction to Government-Binding Theory, Generalized Phrase Structure Grammar, and Lexical-Functional Grammar", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Sells", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "CSLI Lecture Notes Number", |
| "volume": "3", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Sells 85] Sells, Peter, Lectures On Contemporary Syntactic Theories: An Introduction to Government-Binding Theory, Generalized Phrase Structure Grammar, and Lexical-Functional Grammar, CSLI Lecture Notes Number 3, Center for the Study of Language and Information, Leland Stanford Junior University, 1985.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Semantic and Syntactic Aspects of Score Function", |
| "authors": [ |
| { |
| "first": "K.-Y", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "J.-S", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Proc. of COLING-88", |
| "volume": "2", |
| "issue": "", |
| "pages": "642--644", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Su, K.-Y. and J.-S. Chang, \"Semantic and Syntactic Aspects of Score Function,\" Proc. of COLING-88, vol. 2, pp. 642-644, 12th Int. Conf. on Computational Linguistics, Budapest, Hungary, 22-27 August 1988.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A Sequential Truncation Parsing Algorithm Based on the Score Function", |
| "authors": [ |
| { |
| "first": "K.-Y", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "J.-N", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "M.-H", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "J.-S", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Proc. of Int. Workshop on Parsing Technologies (IWPT-89)", |
| "volume": "", |
| "issue": "", |
| "pages": "95--104", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Su, K.-Y., J.-N. Wang, M.-H. Su and J.-S. Chang, \"A Sequential Truncation Parsing Algorithm Based on the Score Function,\" Proc. of Int. Workshop on Parsing Technologies (IWPT-89), pp. 95-104, CMU, Pittsburgh, PA, U.S.A., 28-31 August 1989.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Some Key Issues in Designing MT Systems", |
| "authors": [ |
| { |
| "first": "K.-Y", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "J.-S", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Machine Translation", |
| "volume": "5", |
| "issue": "4", |
| "pages": "265--300", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Su, K.-Y. and J.-S. Chang, \"Some Key Issues in Designing MT Systems,\" Machine Translation, vol. 5, no. 4, pp. 265-300, 1990.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Robustness and Discrimination Oriented Speech Recognition Using Weighted HMM and Subspace Projection Approach", |
| "authors": [ |
| { |
| "first": "K.-Y", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "C.-H", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of IEEE ICASSP-91", |
| "volume": "1", |
| "issue": "", |
| "pages": "541--544", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Su, K.-Y., and C.-H. Lee, \"Robustness and Discrimination Oriented Speech Recognition Using Weighted HMM and Subspace Projection Approach,\" Proceedings of IEEE ICASSP-91, vol. 1, pp. 541-544, Toronto, Ontario, Canada, May 14-17, 1991.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "GLR Parsing with Scoring", |
| "authors": [ |
| { |
| "first": "K.-Y", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "J.-N", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "M.-H", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "J.-S", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Generalized LR Parsing", |
| "volume": "", |
| "issue": "", |
| "pages": "93--112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Su, K.-Y., J.-N. Wang, M.-H. Su, and J.-S. Chang, \"GLR Parsing with Scoring\". In M. Tomita (ed.), Generalized LR Parsing, Chapter 7, pp. 93-112, Kluwer Academic Publishers, 1991.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Preference Semantics, Ill-Formedness, and Metaphor", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [ |
| "A" |
| ], |
| "last": "Wilks", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "AJCL", |
| "volume": "9", |
| "issue": "3-4", |
| "pages": "178--187", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Wilks 83] Wilks, Y. A., \"Preference Semantics, Ill-Formedness, and Metaphor,\" AJCL, vol. 9, no. 3-4, pp. 178-187, July-Dec. 1983.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Figure 1 shows an example of an AST. The annotated version of a node A is denoted as A = A[fA] in the figure." |
| }, |
| "FIGREF1": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Annotated Syntax Tree (AST) and Phrase Levels (PL). The hierarchical AST can be represented by a set of phrase levels, such as L1 through L8 in Figure 1. Formally, a phrase level (PL) is a set of symbols corresponding to a sentential form of the sentence. The phrase levels in Figure 1 are derived from a sequence of rightmost derivations, which is commonly used in an LR parsing mechanism. For example, L5 and L4 correspond to the rightmost derivation B F c4 => B c3 c4. Note that the first phrase level L1 consists of all lexical categories c1 ... cn of the terminal words (w1 ... wn). A phrase level with each symbol annotated with its feature structure is called an annotated phrase level (APL). The i-th APL is denoted as Fi. For example, L5 in Figure 1 has an annotated phrase level F5 = {B[fB], F[fF], c4[fc4]} as its counterpart, where fc4 is the atomic feature of the lexical category c4, which comes from the lexical item of the 4th word w4. With the above notations, the score function can be re-formulated as Score(Sem_i, Syn_j, Lex_k, Words) = P(F_1^m, L_1^m, c_1^n | w_1^n), where c_1^n (a short form for {c1 ... cn}) is the kth set of lexical categories (Lex_k), L_1^m ({L1 ... Lm}) is the jth syntactic structure (Syn_j), and F_1^m ({F1 ... Fm}) is the ith set of semantic annotations (Sem_i)." |
| }, |
| "FIGREF2": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "A head list is formed by arranging the children nodes (X1, X2, ..., XM) in descending order of importance to the compositional semantics of their mother node A. For this reason, Xi1, Xi2 and Xij are called the primary, secondary and j-th heads of A, respectively. The compositional semantic features of the mother node A can be represented as an ordered list of the feature structures of its children, where the order is the same as in the head list. For example, for S -> NP VP, we have a head list (VP, NP), because VP is the (primary) head of the sentence. When composing the compositional semantics of S, the features of VP and NP will be placed in the first and second slots of the feature structure of S, respectively. Because not all children and all features in a feature structure are equally significant for disambiguation, it is not really necessary to annotate a node with the feature structures of all its children. Instead, only the most important N children of a node are needed to characterize the node, and only the most discriminative feature of a child needs to be passed to its mother node. In other words, an N-dimensional feature vector, called a semantic N-tuple, can be used to characterize a node without losing much information for disambiguation. The first feature in the semantic N-tuple comes from the primary head, and is thus called the head feature of the semantic N-tuple. The other features come from the other children in the order of the head list. (Compare these notions with the linguistic sense of head and head feature.) An annotated node can thus be approximated as A ≈ A(f1, f2, ..., fN)." |
| }, |
| "FIGREF3": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Figure 2. Ambiguous PP attachment patterns annotated with semantic 2-tuples." |
| }, |
| "FIGREF4": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "∏t P(Lt | Lt-1) = ∏t P({αt, At, βt} | {αt, X1, ..., XM, βt}), where αt, βt are the left context and right context under which the derivation At => X1 X2 ... XM occurs. (Assume that Lt = {αt, At, βt} and Lt-1 = {αt, X1, ..., XM, βt}.) If L left context symbols in αt and R right context symbols in βt are consulted to evaluate the syntactic score, it is said to operate in LLRR mode of operation. When the context is ignored, such an L0R0 mode of operation reduces to a stochastic context-free grammar." |
| }, |
| "FIGREF5": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Sem(Sem_i, Syn_j, Lex_k, Words) = P(F_1^m | L_1^m, c_1^n, w_1^n) ... the semantic tags from the children of A1. For example, we have terms like P(VP(sta, anim) | α, VP -> v NP, β) and P(VP(sta, in) | α, VP -> v NP PP, β), respectively, for the left and right trees in Figure 2." |
| }, |
| "FIGREF6": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "≈ ∏(l=2..m) P(Al(fl,1, fl,2, ..., fl,N) | αl, Al -> X1 ... XM, βl)" |
| }, |
| "FIGREF7": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "These semantic classes come from a number of sources and the semantic attribute hierarchy of the ArchTran MTS [Su 90, Chen 91]." |
| }, |
| "TABREF2": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "" |
| }, |
| "TABREF3": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Score</td><td colspan=\"2\">Syntax (L2R1)</td><td colspan=\"2\">Semantics (L2R1+A1)</td><td colspan=\"2\">Semantics (L2R1+A2)</td></tr><tr><td>Rank</td><td>Count</td><td>TNRR (%)</td><td>Count</td><td>TNRR (%)</td><td>Count</td><td>TNRR (%)</td></tr><tr><td>1</td><td>781</td><td>87.07</td><td>872</td><td>97.21</td><td>866</td><td>96.54</td></tr><tr><td>2</td><td>101</td><td>98.33</td><td>20</td><td>99.44</td><td>24</td><td>99.22</td></tr><tr><td>3</td><td>9</td><td>99.33</td><td>5</td><td>100.00</td><td>4</td><td>99.67</td></tr><tr><td>4</td><td>5</td><td>99.89</td><td/><td/><td/><td/></tr><tr><td>5</td><td/><td/><td/><td/><td>2</td><td>99.89</td></tr><tr><td>13</td><td/><td/><td/><td/><td>1</td><td>100.00</td></tr><tr><td>18</td><td/><td>100.00</td><td/><td/><td/><td/></tr><tr><td colspan=\"7\">DataBase: 900 Sentences</td></tr><tr><td colspan=\"7\">Test Set: 897 Sentences</td></tr><tr><td colspan=\"7\">Total Number of Ambiguous Trees = 63233</td></tr><tr><td colspan=\"7\">(*) TNRR: Top-N Recognition Rate</td></tr></table>", |
| "text": "shows partial results for open tests (up to rank 5). The recognition rates achieved by considering syntactic score only and semantic score only are shown in the tables. (L2R1+A3 and L2R1+A4 performance are the same as L2R1+A2 in the present test environment, so they are not shown in the tables.) Since each sentence has about 70-75 ambiguous constructs on average, the task perplexity of the current disambiguation task is high." |
| }, |
| "TABREF4": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Score</td><td colspan=\"2\">Syntax (L2R1)</td><td colspan=\"2\">Semantics (L2R1+A1)</td><td colspan=\"2\">Semantics (L2R1+A2)</td></tr><tr><td>Rank</td><td>Count</td><td>TNRR (%)</td><td>Count</td><td>TNRR (%)</td><td>Count</td><td>TNRR (%)</td></tr><tr><td>1</td><td>430</td><td>43.13</td><td>569</td><td>57.07</td><td>578</td><td>57.97</td></tr><tr><td>2</td><td>232</td><td>66.40</td><td>163</td><td>73.42</td><td>167</td><td>74.72</td></tr><tr><td>3</td><td>94</td><td>75.83</td><td>90</td><td>82.45</td><td>75</td><td>82.25</td></tr><tr><td>4</td><td>80</td><td>83.85</td><td>50</td><td>87.46</td><td>49</td><td>87.16</td></tr><tr><td>5</td><td>35</td><td>87.36</td><td>22</td><td>89.67</td><td>28</td><td>89.97</td></tr><tr><td colspan=\"7\">DataBase: 900 Sentences (+)</td></tr><tr><td colspan=\"7\">Test Set: 997 Sentences (++)</td></tr><tr><td colspan=\"7\">Total Number of Ambiguous Trees = 75339</td></tr><tr><td colspan=\"7\">(+) DataBase: effective database size for rotation estimation</td></tr><tr><td colspan=\"7\">(++) Test Set: all test sentences participating in the rotation estimation test</td></tr></table>", |
| "text": "The close test Top-1 performance (Table 1) for syntactic score (87%) is quite satisfactory. When semantic score is taken into account, substantial further improvement in recognition rate can be observed (97%). This shows that the semantic model does provide an effective mechanism for disambiguation. The recognition rates in open tests, however, are less satisfactory under the present test environment. The open test performance can be attributed to the small database size and the estimation error of the parameters thus introduced. Because the training database is small with respect to the complexity of the model, a significant fraction of the probability entries in the testing set cannot be found in the training set. As a result, the parameters are somewhat \"overtuned\" to the training database, and their values are less favorable for open tests. Nevertheless, in both close tests and open tests, the semantic score model shows substantial improvement over syntactic score (and hence stochastic context-free grammar). The improvement is about 10% for close tests and 14% for open tests. In general, by using a larger database and better robust estimation techniques [Su 91a, Chiang 92], the baseline model can be improved further. As we have observed from other experiments on spoken language processing [Su 91a], lexical tagging, and structure disambiguation [Chiang 92], the performance under sparse data conditions can be improved significantly if robust adaptive learning techniques are used to adjust the initial parameters. Interested readers are referred to [Su 91a, Chiang 92] for more details." |
| } |
| } |
| } |
| } |